In this article, we develop a novel microfoundation for the dynamics of "hot" real estate markets, emphasizing the strategic role of list prices. Our model features partial seller commitment: sellers must accept offers at or above the list price but retain discretion to reject lower bids. This institutional feature gives rise to two key forces in equilibrium: a bid inflation effect, in which high-valuation buyers inflate their bids to improve their chance of acceptance, and a bid discouragement effect, in which marginal buyers strategically drop their bids well below list to avoid near-list rejections. These effects create a discontinuity in the bidding function and help explain why bids just below list are rarely observed. We show that this behavior results in higher list prices, increased sales prices, and a higher sales-to-list price ratio as the number of buyers increases, all key indicators of a hot market. Unlike many traditional models, our framework allows for sales at, above, or below list price and sheds light on how bidding behavior and pricing respond to market conditions.
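To make the discontinuity concrete, here is a stylized numerical sketch (our illustration, not the paper's equilibrium derivation; the list price, cutoff, gap, and functional forms are all assumed for exposition):

```python
import numpy as np

# Stylized (hypothetical) piecewise bidding function illustrating the
# discontinuity: buyers above a valuation cutoff bid at/above list
# (bid inflation), marginal buyers bid well below list (bid
# discouragement), and no bids fall just below the list price.
LIST = 1.0     # normalized list price (assumption)
CUTOFF = 0.9   # valuation cutoff (assumption, not derived from the model)
GAP = 0.15     # size of the no-bid region below list (assumption)

def stylized_bid(v):
    if v >= CUTOFF:
        # guaranteed acceptance: bid at least the list price
        return max(LIST, 0.8 * v + 0.2 * LIST)
    # discouraged bid: capped strictly below LIST - GAP, so the
    # interval (LIST - GAP, LIST) contains no bids
    return min(0.9 * v, LIST - GAP)

for v in np.linspace(0.5, 1.5, 11):
    print(f"v = {v:.2f} -> bid = {stylized_bid(v):.3f}")
```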
Yuval David, Fabiana Fournier, Lior Limonad, et al.
Causal reasoning is essential for business process interventions and improvement, requiring a clear understanding of causal relationships among activity execution times in an event log. Recent work introduced a method for discovering causal process models but lacked the ability to capture alternating causal conditions across multiple variants. This raises the challenges of handling missing values and of expressing the alternating conditions among log splits when blending traces with varying activities. We propose a novel method to unify multiple causal process variants into a consistent model that preserves the correctness of the original causal models while explicitly representing their causal-flow alternations. The method is formally defined, proven, evaluated on three open and two proprietary datasets, and released as an open-source implementation.
Modern business and economic datasets often exhibit nonlinear, multi-scale structures that traditional linear tools under-represent. Topological Data Analysis (TDA) offers a geometric lens for uncovering robust patterns, such as connected components, loops and voids, across scales. This paper provides an intuitive, figure-driven introduction to persistent homology and a practical, reproducible TDA pipeline for applied analysts. Through comparative case studies in consumer behavior, equity markets (SAX/eSAX vs.\ TDA) and foreign exchange dynamics, we demonstrate how topological features can reveal segmentation patterns and structural relationships beyond classical statistical methods. We discuss methodological choices regarding distance metrics, complex construction and interpretation, and we introduce the \textit{Topological Stability Index} (TSI), a simple yet interpretable indicator of structural variability derived from persistence lifetimes. We conclude with practical guidelines for TDA implementation, visualization and communication in business and economic analytics.
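As a concrete starting point for such a pipeline, the sketch below computes persistence lifetimes with the ripser library and derives a TSI-like indicator; the specific TSI formula used (coefficient of variation of finite lifetimes) is our illustrative assumption and may differ from the paper's definition.

```python
# Minimal TDA pipeline sketch using ripser (pip install ripser).
import numpy as np
from ripser import ripser

def persistence_lifetimes(X, maxdim=1):
    """Finite persistence lifetimes (death - birth) per homology dimension."""
    out = []
    for dgm in ripser(X, maxdim=maxdim)['dgms']:
        finite = dgm[np.isfinite(dgm[:, 1])]   # drop the infinite H0 bar
        out.append(finite[:, 1] - finite[:, 0])
    return out

def topological_stability_index(lifetimes):
    """Illustrative TSI (assumption): coefficient of variation of lifetimes."""
    nonempty = [l for l in lifetimes if len(l)]
    if not nonempty:
        return 0.0
    lt = np.concatenate(nonempty)
    return float(np.std(lt) / np.mean(lt))

# Example: a noisy circle should yield one prominent H1 loop
theta = np.random.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * np.random.randn(200, 2)
print(topological_stability_index(persistence_lifetimes(X)))
```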
In a rapidly evolving landscape marked by continuous change and complex challenges, effective cash management stands as a cornerstone for ensuring business sustainability and driving performance. To address these pressing demands, cash managers are increasingly turning to innovative financing solutions such as venture capital, green finance, crowdfunding, advanced services from Pan-African banks, and blockchain technology. These cutting-edge tools are pivotal in bolstering resilience against market volatility, ecological transitions, and the accelerating pace of technological change. The present article examines how such innovative financial approaches can serve as strategic drivers, enabling businesses to transform challenges into opportunities. The analysis underscores that rethinking cash management through innovation is a critical pathway to boosting the performance of Moroccan companies. Embracing these forward-thinking strategies therefore unlocks new avenues for development, empowering companies to adapt with agility amid the uncertainties of a shifting environment.
In business process simulation, resource availability is typically modeled by assigning a calendar to each resource, e.g., Monday-Friday, 9:00-18:00. Resources are assumed to be always available during each time slot in their availability calendar. This assumption often becomes invalid due to interruptions, breaks, or time-sharing across processes. In other words, existing approaches fail to capture intermittent availability. Another limitation of existing approaches is that they either do not consider multitasking behavior, or if they do, they assume that resources always multitask (up to a maximum capacity) whenever available. However, studies have shown that the multitasking patterns vary across days. This paper introduces a probabilistic approach to model resource availability and multitasking behavior for business process simulation. In this approach, each time slot in a resource calendar has an associated availability probability and a multitasking probability per multitasking level. For example, a resource may be available on Fridays between 14:00-15:00 with 90\% probability, and given that they are performing one task during this slot, they may take on a second concurrent task with 60\% probability. We propose algorithms to discover probabilistic calendars and probabilistic multitasking capacities from event logs. An evaluation shows that, with these enhancements, simulation models discovered from event logs better replicate the distribution of activities and cycle times, relative to approaches with crisp calendars and monotasking assumptions.
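A minimal sketch of how such a probabilistic calendar could be represented and sampled (the data layout and helper below are our assumptions, not the paper's implementation):

```python
import random

# Sketch of a probabilistic resource calendar. Each weekly slot carries an
# availability probability and per-level multitasking probabilities:
# mt_prob[k] is the probability of accepting a (k+1)-th concurrent task
# given that k tasks are already running.
calendar = {
    ("Friday", 14): {"available": 0.9, "mt_prob": [0.6, 0.2]},
    ("Friday", 15): {"available": 0.7, "mt_prob": [0.5, 0.1]},
}

def can_start_task(day, hour, running_tasks):
    """Sample whether the resource can take on one more task in this slot."""
    slot = calendar.get((day, hour))
    if slot is None or random.random() > slot["available"]:
        return False  # resource not available in this slot
    if running_tasks == 0:
        return True   # available and idle: start the task
    probs = slot["mt_prob"]
    if running_tasks > len(probs):
        return False  # beyond the modeled multitasking capacity
    return random.random() < probs[running_tasks - 1]

print(can_start_task("Friday", 14, 1))  # may accept a 2nd concurrent task
```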
In recent years, the challenge of extracting information from business documents has emerged as a critical task, finding applications across numerous domains. This effort has attracted substantial interest from both industry and academia, highlighting its significance in the current technological landscape. Most datasets in this area focus primarily on Key Information Extraction (KIE), where the extraction process revolves around extracting information using a specific, predefined set of keys. Unlike most existing datasets and benchmarks, our focus is on discovering key-value pairs (KVPs) without relying on predefined keys, navigating through an array of diverse templates and complex layouts. This task presents unique challenges, primarily due to the absence of comprehensive datasets and benchmarks tailored for non-predetermined KVP extraction. To address this gap, we introduce KVP10k, a new dataset and benchmark specifically designed for KVP extraction. The dataset contains 10,707 richly annotated images. In our benchmark, we also introduce a new challenging task that combines elements of both KIE and KVP extraction in a single task. KVP10k sets itself apart with its extensive diversity in data and richly detailed annotations, paving the way for advancements in the field of information extraction from complex business documents.
Gabriel Cáceres-Aravena, Bastián Real, Diego Guzmán-Silva, et al.
Transfer of information between topological edge states is a robust way of spatially manipulating quantum states while preserving their coherence in lattice environments. This method is particularly efficient when the edge modes are kept within the topological gap of the lattice during the transfer. In this work we show experimentally the transfer of photonic modes between topological edge states located at opposite ends of a dimerized one-dimensional photonic lattice. We use a diamond lattice of coupled waveguides and show that the transfer is insensitive both to the presence of a high density of states in the form of a flat band at an energy close to that of the edge states, and to the presence of disorder in the hoppings. We explore the dynamics in the waveguide lattice using a wavelength-scan method, in which different input wavelengths translate into different effective waveguide lengths. These results open the way to the implementation of more efficient protocols based on the active driving of the hoppings.
Mauricio Jacobo-Romero, Danilo S. Carvalho, Andre Freitas
In this work, we examine Business Process (BP) production as a signal; this novel approach treats a BP workflow as a linear time-invariant (LTI) system. We analyse BP productivity in the frequency domain; this standpoint examines how labour and capital act as BP input signals and how their fundamental frequencies affect BP production. Our research also proposes a simulation framework for a BP in the frequency domain for estimating productivity gains due to the introduction of automation steps. Our ultimate goal is to supply evidence to address Solow's Paradox.
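A minimal sketch of this standpoint (with assumed signals and an assumed filter standing in for the process; none of this is the paper's calibrated model): labour and capital are periodic input signals, the BP is an LTI filter, and the output spectrum reveals the dominant fundamental frequency.

```python
import numpy as np
from scipy import signal

fs = 1.0                      # one sample per day (assumption)
t = np.arange(0, 365)
labour = 1.0 + 0.5 * np.sin(2 * np.pi * t / 7)    # weekly work rhythm
capital = 1.0 + 0.2 * np.sin(2 * np.pi * t / 30)  # monthly budget cycle

# Hypothetical LTI "process": a moving-average filter applied to the
# combined inputs; automation could be modeled as a changed kernel.
kernel = np.ones(5) / 5
production = signal.lfilter(kernel, [1.0], labour + capital)

freqs = np.fft.rfftfreq(len(t), d=1 / fs)
spectrum = np.abs(np.fft.rfft(production - production.mean()))
print("dominant frequency (1/days):", freqs[np.argmax(spectrum)])
```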
Let $K/\mathbb{Q}$ be a real cyclic extension of degree divisible by $p$. We analyze the {\it statement} of the "Real Abelian Main Conjecture", for the $p$-class group $\mathcal{H}_K$ of $K$, in this non-semi-simple case. The classical {\it algebraic} definition of the $p$-adic isotypic components $\mathcal{H}^{\rm alg}_{K,\varphi}$, for irreducible $p$-adic characters $\varphi$, is inappropriate with respect to analytic formulas, because of capitulation of $p$-classes in the $p$-sub-extension of $K/\mathbb{Q}$. In the 1970s, we gave an {\it arithmetic} definition, $\mathcal{H}^{\rm ar}_{K,\varphi}$, and formulated the conjecture, still unproven, $\# \mathcal{H}^{\rm ar}_{K,\varphi} = \# (\mathcal{E}_K / \mathcal{E}^\circ_K \, \mathcal{F}_{\!K})_{\varphi_0}$, in terms of the units $\mathcal{E}_K$, the subgroup $\mathcal{E}^\circ_K$ (generated by units of the strict subfields of $K$), and the cyclotomic units $\mathcal{F}_K$, where $\varphi_0$ is the tame part of $\varphi$. We prove that the conjecture holds as soon as there exists a prime $\ell$, totally inert in $K$, such that $\mathcal{H}_K$ capitulates in $K(\mu_\ell)$; this existence has been checked in various circumstances, making it a promising new tool.
Despite the remarkable advancements in machine translation, the current sentence-level paradigm faces challenges when dealing with highly contextual languages like Japanese. In this paper, we explore how context-awareness can improve the performance of current Neural Machine Translation (NMT) models for English-Japanese business dialogue translation, and what kind of context provides meaningful information to improve translation. As business dialogue involves complex discourse phenomena but offers scarce training resources, we adapted a pretrained mBART model, finetuning it on multi-sentence dialogue data, which allows us to experiment with different contexts. We investigate the impact of larger context sizes and propose novel context tokens encoding extra-sentential information, such as speaker turn and scene type. We use Conditional Cross-Mutual Information (CXMI) to explore how much of the context the model uses, and we generalise CXMI to study the impact of extra-sentential context. Overall, we find that models leverage both preceding sentences and extra-sentential context (with CXMI increasing with context size), and we provide a more focused analysis of honorifics translation. Regarding translation quality, increased source-side context paired with scene and speaker information improves model performance compared to previous work and our context-agnostic baselines, as measured by BLEU and COMET.
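For reference, CXMI is commonly defined in the context-aware NMT literature as the entropy reduction obtained by adding context; the estimator below contrasts a context-aware model $p_{\theta_C}$ with a context-agnostic model $p_{\theta}$ (a standard formulation, not necessarily the paper's exact generalisation):

```latex
% CXMI: entropy reduction in the target Y when context C is added to the
% source X, estimated over N held-out examples from two models' probabilities.
\[
\mathrm{CXMI}(C \rightarrow Y \mid X)
  = H(Y \mid X) - H(Y \mid X, C)
  \approx \frac{1}{N} \sum_{i=1}^{N}
      \log \frac{p_{\theta_C}\bigl(y^{(i)} \mid x^{(i)}, c^{(i)}\bigr)}
                {p_{\theta}\bigl(y^{(i)} \mid x^{(i)}\bigr)}
\]
```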
The field of Automatic Machine Learning (AutoML) has recently attained impressive results, including the discovery of state-of-the-art machine learning solutions, such as neural image classifiers. This is often done by applying an evolutionary search method, which samples multiple candidate solutions from a large space and evaluates the quality of each candidate through a long training process. As a result, the search tends to be slow. In this paper, we show that large efficiency gains can be obtained by employing a fast unified functional hash, especially through the functional equivalence caching technique, which we also present. The central idea is to detect by hashing when the search method produces equivalent candidates, which occurs very frequently, and this way avoid their costly re-evaluation. Our hash is "functional" in that it identifies equivalent candidates even if they were represented or coded differently, and it is "unified" in that the same algorithm can hash arbitrary representations; e.g. compute graphs, imperative code, or lambda functions. As evidence, we show dramatic improvements on multiple AutoML domains, including neural architecture search and algorithm discovery. Finally, we consider the effect of hash collisions, evaluation noise, and search distribution through empirical analysis. Altogether, we hope this paper may serve as a guide to hashing techniques in AutoML.
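A minimal sketch of the caching idea (our illustration; the probe-input scheme is an assumption, and the paper's unified hash handles arbitrary representations such as compute graphs and imperative code): hash each candidate by its outputs on a fixed set of probe inputs, so that behaviorally equivalent candidates share a cache entry and skip re-evaluation.

```python
import pickle, hashlib

PROBES = [0.0, 1.0, -2.5, 3.14159, 42.0]  # fixed probe inputs (assumption)

def functional_hash(candidate):
    """Hash a candidate by its behavior, not its representation."""
    outputs = [candidate(x) for x in PROBES]
    return hashlib.sha256(pickle.dumps(outputs)).hexdigest()

cache = {}

def evaluate_with_cache(candidate, expensive_eval):
    """Skip the costly training/evaluation for functionally seen candidates."""
    key = functional_hash(candidate)
    if key not in cache:
        cache[key] = expensive_eval(candidate)
    return cache[key]

# Two differently-coded but equivalent candidates share a hash:
f = lambda x: 2 * x + 1
g = lambda x: x + x + 1
print(functional_hash(f) == functional_hash(g))  # True
```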
Knowledge of the busy period length distribution function is important for any queueing system, including the $M/G/\infty$ queue. But the exact mathematical expressions are in general very complicated, with few exceptions, usually involving infinite sums and multiple convolutions. In this work, we derive bounds for the $M/M/\infty$ system busy period length distribution function (the second $M$ denoting exponential service times) whose analytic expressions are simpler than the exact one. As a consequence, bounds for the $M/M/\infty$ system busy cycle length distribution function are also presented.
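For orientation, a classical exact result (often attributed to Takács) gives the mean of the $M/G/\infty$ busy period in simple closed form, while the distribution function itself admits no comparably simple expression, which is what motivates the bounds:

```latex
% Mean busy period of the M/G/\infty queue, with arrival rate \lambda,
% mean service time E[S], and traffic intensity \rho = \lambda E[S]:
\[
E[B] = \frac{e^{\rho} - 1}{\lambda},
\qquad\text{so for } M/M/\infty \ (\rho = \lambda/\mu):\quad
E[B] = \frac{e^{\lambda/\mu} - 1}{\lambda}.
\]
```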
Verifying temporal compliance rules, such as a rule stating that an inquiry must be answered within a time limit, is a recurrent operation in the realm of business process compliance. In this setting, a typical use case is one where a manager seeks to retrieve all cases where a temporal rule is violated, given an event log recording the execution of a process over a time period. Existing approaches for checking temporal rules require a full scan of the log. Such approaches are unsuitable for interactive use when the log is large and the set of compliance rules is evolving. This paper proposes an approach to evaluate temporal compliance rules in sublinear time by pre-computing a data structure that summarizes the temporal relations between activities in a log. The approach caters for a wide range of temporal compliance patterns and supports incremental updates. Our evaluation on twenty real-life logs shows that our data structure allows for real-time checking of a large set of compliance rules.
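The following sketch shows one plausible shape for such a pre-computed summary (an illustrative structure under our own assumptions, not necessarily the paper's): per activity pair, a sorted list of elapsed times supports violation retrieval by binary search and incremental insertion.

```python
import bisect

# (activity_a, activity_b) -> sorted list of (elapsed_seconds, case_id).
# A rule "b must follow a within `limit`" is then answered by one binary
# search instead of a full log scan.
index = {}

def add_observation(act_a, act_b, elapsed, case_id):
    lst = index.setdefault((act_a, act_b), [])
    bisect.insort(lst, (elapsed, case_id))  # incremental update

def violations(act_a, act_b, limit):
    """Return case ids where b occurred more than `limit` after a."""
    lst = index.get((act_a, act_b), [])
    # sentinel sorts after every case_id string with elapsed == limit
    pos = bisect.bisect_right(lst, (limit, chr(0x10FFFF)))
    return [case_id for _, case_id in lst[pos:]]  # sublinear locate

add_observation("inquiry", "answer", 3600, "case-1")
add_observation("inquiry", "answer", 90000, "case-2")
print(violations("inquiry", "answer", limit=86400))  # ['case-2']
```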
Andreas Rumsch, Christoph Imboden, Alberto Calatroni, et al.
More and more household appliances connect to the Internet and exchange data freely. This is the foundation for truly smart buildings. However, there is still no uniform communication technology that can connect all appliances from all vendors. Protocols differ between manufacturers, making interoperability difficult or even impossible. Manufacturers cannot rely on a reference implementation, and real estate developers and operators are reluctant to commit to a system until it is clear which one will prevail. A similar situation is evident in smart grids and applies equally to the energy supply industry. This fragmentation ultimately leads to missed opportunities in terms of business models that could connect customers with service providers. We present a first draft of an architecture: SINA, the Smart Interoperability Architecture. SINA builds on existing decentralized infrastructure, which avoids making market participants dependent on an overpowering service provider. The core element of the technical solution is an open-source module integrated into the private clouds of the manufacturers, energy suppliers, and service providers. The architecture addresses problems of data ownership, privacy, and data security while avoiding central administrative structures. It manages data access and transfer in a decentralized and distributed system. SINA uses a blockchain and smart contracts to ensure that the records of which data are accessed, by whom, how they are processed, and which monetary transactions take place are immutably stored and made available. This allows providers to offer services to users in a transparent and trustworthy manner. Finally, SINA includes a matchmaking block that helps service providers find potential customers and vice versa. This set of features makes SINA unique.
Qingqing Cao, Oriana Riva, Aruna Balasubramanian, et al.
We present BewQA, a system specifically designed to answer a class of questions that we call Bew questions. Bew questions relate to businesses and services such as restaurants, hotels, and movie theaters; for example, "Until what time is happy hour?". These questions are challenging to answer because the answers are found on the open Web, appear in short sentences without surrounding context, and are dynamic, since the webpage information can be updated frequently. Under these conditions, existing QA systems perform poorly. We present a practical approach, called BewQA, that can answer Bew queries by mining a template of the business-related webpages and using the template to guide the search. We show how we can extract the template automatically by leveraging aggregator websites that aggregate information about business entities in a domain (e.g., restaurants). We answer a given question by identifying the section of the extracted template that is most likely to contain the answer. By doing so, we can extract answers even when the answer span does not have sufficient context. Importantly, BewQA does not require any training. We crowdsource a new dataset of 1066 Bew questions and ground-truth answers in the restaurant domain. Compared to state-of-the-art QA models, BewQA achieves a 27 percentage point improvement in F1 score. Compared to a commercial search engine, BewQA correctly answered 29% more Bew questions.
Marcus Fischer, Adrian Hofmann, Florian Imgrund, et al.
Digital transformation forces companies to rethink their processes to meet current customer needs. Business Process Management (BPM) can provide the means to structure and tackle this change. However, most approaches to BPM face limits on the number of processes they can optimize at a time, due to complexity and resource constraints. Addressing this shortcoming, the concept of the long tail of business processes suggests a hybrid approach that entails managing important processes centrally, while incrementally improving the majority of processes at their place of execution. This study scrutinizes this observation as well as corresponding implications. First, we define a system of indicators to automatically prioritize processes based on execution data. Second, we use process mining to analyze processes from multiple companies to investigate the distribution of process value in terms of their process variants. Third, we examine the characteristics of the process variants contained in the short head and the long tail to derive and justify recommendations for their management. Our results suggest that the assumption of a long-tailed distribution holds across companies and indicators and also applies to the overall improvement potential of processes and their variants. Across all cases, process variants in the long tail were characterized by fewer customer contacts, lower execution frequencies, and a larger number of involved stakeholders, making them suitable candidates for distributed improvement.
Ankit Agrawal, Renato Mancuso, Rodolfo Pellizzoni, et al.
One of the primary sources of unpredictability in modern multi-core embedded systems is contention over shared memory resources, such as caches, interconnects, and DRAM. Despite significant achievements in the design and analysis of multi-core systems, there is a need for a theoretical framework that can be used to reason about the worst-case behavior of real-time workloads when both processors and memory resources are subject to scheduling decisions. In this paper, we focus our attention on dynamic allocation of main memory bandwidth. In particular, we study how to determine the worst-case response time of tasks spanning a sequence of time intervals, each with a different bandwidth-to-core assignment. We show that the response-time computation can be reduced to a maximization problem over assignments of memory requests to different time intervals, and we provide an efficient way to solve this problem. As a case study, we then demonstrate how our proposed analysis can be used to improve the schedulability of Integrated Modular Avionics systems in the presence of memory-intensive workloads.
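To make the structure of this maximization concrete, here is a deliberately naive sketch (our simplification with assumed semantics, not the paper's efficient analysis): each memory request assigned to an interval stalls the core for 1/bandwidth time units there, and the adversary maximizes the response time over assignments by brute force.

```python
import itertools

def response_time(intervals, assignment, exec_ms):
    """Response time when assignment[i] requests stall the core during
    interval i. intervals: list of (length_ms, bw_requests_per_ms).
    Returns None if the assignment does not fit or the task never finishes."""
    elapsed, cpu = 0.0, exec_ms
    pending = sum(assignment)
    for (length, bw), k in zip(intervals, assignment):
        stall = k / bw
        if stall > length:
            return None                      # interval cannot serve k requests
        pending -= k
        done_cpu = min(cpu, length - stall)  # compute in the remaining slack
        cpu -= done_cpu
        if cpu == 0 and pending == 0:
            return elapsed + stall + done_cpu
        elapsed += length
    return None

def worst_case(intervals, n_requests, exec_ms):
    """Naive maximization over feasible request-to-interval assignments."""
    best = None
    for cuts in itertools.product(range(n_requests + 1), repeat=len(intervals)):
        if sum(cuts) != n_requests:
            continue
        r = response_time(intervals, cuts, exec_ms)
        if r is not None and (best is None or r > best):
            best = r
    return best

ivals = [(10.0, 1.0), (10.0, 0.25)]  # ms, requests/ms: 2nd interval is slow
print(worst_case(ivals, n_requests=4, exec_ms=3.0))  # 18.0: requests packed late
```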
Process-aware recommender systems can provide critical decision support to aid business process execution by recommending which actions to take next. Building on recent advances in deep learning, we present a novel memory-augmented neural network (MANN) approach for constructing a process-aware recommender system. We propose a novel network architecture, the Write-Protected Dual Controller Memory-Augmented Neural Network (DCw-MANN), for building prescriptive models. To evaluate the feasibility and usefulness of our approach, we consider three real-world datasets and show that our approach outperforms several baselines on the tasks of suffix recommendation and next-task prediction.
The main role of the ITER Radial Neutron Camera (RNC) diagnostic is to measure in real time the plasma neutron emissivity profile at high peak count rates for a time duration of up to 500 s. Due to the unprecedented high-performance conditions, and after the identification of critical problems, a set of activities was selected, focused on the development of high-priority prototypes capable of delivering answers to those problems before the final RNC design. This paper presents one of the selected activities: the design, development, and testing of a dedicated FPGA code for the RNC Data Acquisition prototype. The FPGA code aims to acquire, process, and store in real time the neutron and gamma pulses from the detectors located in collimated lines of sight viewing a poloidal plasma section from the ITER Equatorial Port Plug 1. The hardware platform used was an evaluation board from Xilinx (KC705) carrying an IPFN FPGA Mezzanine Card (FMC-AD2-1600) with 2 digitizer channels of 12-bit resolution sampling at up to 1.6 GSamples/s. The code performs the proper input signal conditioning using a configuration down-sampled to 400 MSamples/s, applies dedicated algorithms for pulse detection, filtering, and pileup detection, and includes two distinct data paths operating simultaneously: i) the event-based data path for pulse storage; and ii) the real-time processing path, with dedicated algorithms for pulse shape discrimination and pulse height spectra. For continuous data throughput, both data paths are streamed to the host through two distinct PCIe x8 Direct Memory Access (DMA) channels.
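As an offline illustration of the pulse-processing logic described above (the threshold, window length, and units are our assumptions, not RNC parameters), the following Python sketch detects pulses by threshold crossing and flags pile-up when the signal re-crosses the threshold within a pulse window:

```python
import numpy as np

FS = 400e6          # effective sampling rate after down-sampling (from text)
THRESHOLD = 0.2     # detection threshold, arbitrary units (assumption)
DEAD_SAMPLES = 40   # expected pulse extent in samples (assumption)

def detect_pulses(wave):
    """Return (start_index, height, piled_up) for each detected pulse."""
    pulses, i = [], 0
    while i < len(wave):
        if wave[i] > THRESHOLD:
            end = min(i + DEAD_SAMPLES, len(wave))
            height = float(wave[i:end].max())
            # pile-up: an upward re-crossing of the threshold inside the window
            rising = np.diff((wave[i + 1:end] > THRESHOLD).astype(int)) == 1
            pulses.append((i, height, bool(rising.any())))
            i = end
        else:
            i += 1
    return pulses
```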