Current Information Seeking (InfoSeeking) agents struggle to maintain focus and coherence during long-horizon exploration, as tracking search states, including the planning procedure and massive search results, within a single plain-text context is inherently fragile. To address this, we introduce \textbf{Table-as-Search (TaS)}, a structured planning framework that reformulates the InfoSeeking task as a Table Completion task. TaS maps each query into a structured table schema maintained in an external database, where rows represent search candidates and columns denote constraints or required information. This table precisely manages the search state: filled cells strictly record the history and search results, while empty cells serve as an explicit search plan. Crucially, TaS unifies three distinct InfoSeeking tasks: Deep Search, Wide Search, and the challenging DeepWide Search. Extensive experiments demonstrate that TaS significantly outperforms numerous state-of-the-art baselines, including multi-agent frameworks and commercial systems, across three kinds of benchmarks. Furthermore, our analysis validates TaS's superior robustness in long-horizon InfoSeeking, alongside its efficiency, scalability, and flexibility. Code and datasets are publicly released at https://github.com/AIDC-AI/Marco-Search-Agent.
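The table-as-state idea described above can be illustrated with a minimal sketch: rows are candidates, columns are constraints, filled cells record results, and the empty cells are read off as the remaining search plan. Class and method names below are illustrative only, not taken from the released code.

```python
# Minimal sketch of a Table-as-Search state: filled cells record search
# results; empty cells (None) constitute the remaining search plan.

class SearchTable:
    def __init__(self, columns):
        self.columns = columns  # constraints / required information
        self.rows = []          # one dict per search candidate

    def add_candidate(self, name):
        self.rows.append({"candidate": name, **{c: None for c in self.columns}})

    def record(self, candidate, column, value):
        # Fill a cell: strictly records history and search results.
        for row in self.rows:
            if row["candidate"] == candidate:
                row[column] = value

    def plan(self):
        """Empty cells are the explicit plan of remaining searches."""
        return [(r["candidate"], c)
                for r in self.rows for c in self.columns if r[c] is None]

table = SearchTable(columns=["founded_year", "headquarters"])
table.add_candidate("Company A")
table.record("Company A", "founded_year", 1998)
print(table.plan())  # remaining (candidate, constraint) pairs to search
```

An agent loop would repeatedly pop an entry from `plan()`, issue the corresponding search, and call `record()` with the result, terminating when the plan is empty.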
Illicit massage businesses (IMBs) masquerade as legitimate massage parlors while facilitating commercial sex and human trafficking. Law enforcement must identify these businesses within a dense population of lawful establishments, but investigative resources are limited and the illicit status of each location is unknown until inspection. Detection methods based on online reviews offer some insight, yet operators can manipulate these signals, leaving covert establishments undetected. IMBs constitute one of the largest segments of indoor sex trafficking in the United States, with an estimated 9,000 establishments. Mobility data offers an alternative to online signals, covering establishments that avoid digital visibility entirely. We derive features from mobility data spanning temporal visitation patterns, dwell times, visitor catchment areas, and demand stability. Because confirmed labels exist only for establishments identified through advertising platforms, we employ positive-unlabeled learning to address the label asymmetry in ground truth. The model achieves 0.97 AUC and 0.84 Average Precision. Four operational signatures characterize high-risk establishments: demand consistency, evening-concentrated visits, compressed service durations, and locally drawn clientele. The model produces risk scores for each business-week observation. Aggregating to the business level, prioritizing the highest-risk 10% of massage establishments captures 53% of known illicit operations, a 5.3-fold improvement over uninformed inspection. We develop a decision-support system that produces calibrated prioritization scores for law enforcement, enabling investigators to concentrate inspections on the highest-risk venues. The operational signatures may resist strategic manipulation because they reflect actual operations rather than online signals that operators can control.
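The positive-unlabeled setup described above can be sketched with the classical Elkan-Noto "non-traditional classifier": train labeled-vs-unlabeled, estimate the labeling frequency c on labeled positives, and rescale scores. The synthetic features, logistic model, and in-sample estimate of c below are illustrative assumptions; the paper's mobility features and exact estimator are not reproduced.

```python
# Positive-unlabeled learning sketch (Elkan & Noto style): only a fraction
# of true positives carry a label s=1; everything else is unlabeled.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 4))                 # stand-in "mobility" features
true_pos = X[:, 0] + X[:, 1] > 1.0          # hidden true illicit status
s = true_pos & (rng.random(n) < 0.3)        # only ~30% of positives labeled

clf = LogisticRegression().fit(X, s)        # labeled vs. unlabeled
# Estimate c = P(s=1 | y=1) as the mean score on labeled positives
# (a held-out set should be used in practice), then rescale:
# P(y=1 | x) ~ P(s=1 | x) / c.
c = clf.predict_proba(X[s])[:, 1].mean()
risk = clf.predict_proba(X)[:, 1] / c

# Ranking by risk should concentrate true positives near the top,
# mirroring the inspection-prioritization use case.
top = np.argsort(risk)[::-1][:100]
print(true_pos[top].mean())                 # precision in the top decile
```

Note that the rescaled scores are used only for ranking here; calibrated probabilities, as in the decision-support system above, would require a proper held-out estimate of c.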
Ana S. Dobrota, Natalia V. Skorodumova, Igor A. Pašti
The adsorption of single atoms on pristine and defected hexagonal boron nitride (h-BN) was systematically investigated using density functional theory. Elements from the first three rows of the periodic table, together with selected transition and coinage metals, were examined on the pristine surface and at boron- and nitrogen-vacancy sites. On pristine h-BN, adsorption is generally weak and dominated by dispersion forces, with measurable chemisorption limited to highly electronegative atoms such as C, O, and F. The introduction of vacancies transforms h-BN into a chemically active material, increasing adsorption energies by one to two orders of magnitude. The boron vacancy strongly stabilizes metallic and electropositive species through coordination to undercoordinated nitrogen atoms, whereas the nitrogen vacancy selectively binds electronegative and covalent adsorbates. Scaling of adsorption energies with elemental cohesive energies distinguishes regimes of physisorption, chemisorption, and substitutional stabilization. These insights provide a unified description of adsorption trends across the periodic table and establish defect engineering as an effective strategy for tailoring the catalytic, sensing, and electronic properties of h-BN.
Christian Skafte Beck Clausen, Bo Nørregaard Jørgensen, Zheng Grace Ma
Facing economic challenges due to the diverse objectives of businesses and consumers, commercial greenhouses strive to minimize energy costs while addressing CO2 emissions. This situation is intensified by rising energy costs and the global imperative to curtail CO2 emissions. To address these dynamic economic challenges, this paper proposes an architectural design for an energy economic dispatch testbed for commercial greenhouses. Using the Attribute-Driven Design method, core architectural components of a software-in-the-loop testbed are proposed, emphasizing modularity and careful consideration of the multi-objective optimization problem. This approach extends prior research by implementing a modular multi-objective optimization framework in Java. The results demonstrate the successful integration of the CO2-reduction objective within the modular architecture with minimal effort. The multi-objective optimization output can also be employed to examine cost and CO2 objectives, ultimately serving as a valuable decision-support tool. The novel testbed architecture and modular approach can tackle the multi-objective optimization problem and enable commercial greenhouses to navigate the intricate landscape of energy cost and CO2 emissions management.
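The modular multi-objective idea above can be sketched as pluggable objective functions scored by a weighted sum. All numbers, names, and the weighted-sum scalarization are illustrative assumptions; the paper's Java framework is not reproduced.

```python
# Toy modular dispatch: each objective is a separate function, and the
# dispatcher selects the candidate schedule minimizing a weighted sum
# of energy cost and CO2 emissions.

def energy_cost(schedule, prices):
    return sum(p * q for p, q in zip(prices, schedule))

def co2_emissions(schedule, intensity):
    return sum(i * q for i, q in zip(intensity, schedule))

def dispatch(candidates, prices, intensity, w_cost=1.0, w_co2=0.5):
    """Pick the schedule with the smallest weighted objective sum."""
    def score(s):
        return w_cost * energy_cost(s, prices) + w_co2 * co2_emissions(s, intensity)
    return min(candidates, key=score)

prices = [0.30, 0.10, 0.20]                     # currency/kWh per period
intensity = [0.5, 0.2, 0.4]                     # kg CO2/kWh per period
candidates = [[2, 0, 2], [0, 4, 0], [1, 2, 1]]  # kWh per period
best = dispatch(candidates, prices, intensity)
print(best)
```

Adding a new objective amounts to writing one more function and one more weighted term, which mirrors the "minimal effort" integration of the CO2 objective described above.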
Georges Dupret, Konstantin Sozinov, Carmen Barcena Gonzalez
et al.
Making ideal decisions as a product leader in a web-facing company is extremely difficult. In addition to navigating the ambiguity of customer satisfaction and achieving business goals, one must also pave a path forward for one's products and services to remain relevant, desirable, and profitable. Data and experimentation to test product hypotheses are key to informing product decisions. Online controlled experiments by A/B testing may provide the best data to support such decisions with high confidence, but can be time-consuming and expensive, especially when one wants to understand the impact on key business metrics such as retention or long-term value. Offline experimentation allows one to rapidly iterate and test, but often cannot provide the same level of confidence and cannot easily shine a light on impact on business metrics. We introduce a novel, lightweight, and flexible approach to investigating hypotheses, called scenario analysis, that aims to support product leaders' decisions using data about users and estimates of business metrics. Its strengths are that it can provide guidance on trade-offs incurred by growing or shifting consumption, estimate trends in long-term outcomes such as retention and other important business metrics, and generate hypotheses about relationships between metrics at scale.
This article sets out what information on social aspects Dutch listed companies published in their annual reports for 2022. The study covers the reports of the 75 main funds of the AEX, AMX, and AScX. For a large share of these companies, reporting on non-financial information is mandatory (AE 2018; EU 2014), but that obligation contains no clear requirements for reporting on social aspects beyond generic rules (EC 2014). Voluntary reporting on these aspects contributes to companies' societal legitimacy and their 'licence to operate'. This study shows that many companies already report on social aspects before this becomes mandatory under the CSRD. At the same time, we see large differences in the detailed aspects. Owing to the research method, no firm conclusions can be drawn.
Business, Business mathematics. Commercial arithmetic. Including tables, etc.
A mixed arithmetic-mean, geometric-mean inequality was conjectured by F. Holland and proved by K. Kedlaya. In this note, we prove a mixed arithmetic-mean, harmonic-mean inequality, a mixed geometric-mean, harmonic-mean inequality, and a more extended result: a mixed arithmetic-mean, geometric-mean, harmonic-mean inequality.
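For context, the Holland-Kedlaya inequality referenced above can be stated explicitly; the note's new inequalities are companion statements of the same shape with the harmonic mean in place of one of the two means (their precise formulations are as in the note itself).

```latex
% Kedlaya's mixed arithmetic-mean, geometric-mean inequality:
% for positive reals $x_1,\dots,x_n$, the geometric mean of the partial
% arithmetic means dominates the arithmetic mean of the partial
% geometric means.
\[
\left( \prod_{k=1}^{n} \frac{x_1 + \cdots + x_k}{k} \right)^{1/n}
\;\geq\;
\frac{1}{n} \sum_{k=1}^{n} \left( x_1 x_2 \cdots x_k \right)^{1/k},
\]
% where the partial harmonic mean appearing in the note's variants is
\[
H_k = \frac{k}{\dfrac{1}{x_1} + \cdots + \dfrac{1}{x_k}}.
\]
```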
The paper is a continuation of another paper (https://philpapers.org/rec/PENFLT-2) published as Part I. Now, the case of “n=3” is inferred as a corollary from the Kochen-Specker theorem (1967): the eventual solutions of Fermat’s equation for “n=3” would correspond to an admissible disjunctive division of a qubit into two absolutely independent parts, contradicting the contextuality of any qubit implied by the Kochen-Specker theorem. Incommensurability (implied by the absence of hidden variables) is considered as dual to quantum contextuality. The relevant mathematical structure is Hilbert arithmetic in a wide sense (https://dx.doi.org/10.2139/ssrn.3656179), in the framework of which Hilbert arithmetic in a narrow sense and the qubit Hilbert space are dual to each other. A few cases involving set theory are possible: (1) only within the case “n=3” and, implicitly, within any next level of “n” in Fermat’s equation; (2) the identification of the case “n=3” with the general case, utilizing the axiom of choice rather than the axiom of induction. If the former is the case, the application of set theory and arithmetic can remain disjunctively divided: set theory, “locally”, within any level; and arithmetic, “globally”, across all levels. If the latter is the case, the proof is carried out thoroughly within set theory. Thus, the relevance of Yablo’s paradox to the statement of Fermat’s last theorem is avoided in both cases. The idea of “arithmetic mechanics” is sketched: it might deduce the basic physical dimensions of mechanics (mass, time, distance) from the axioms of arithmetic after a relevant generalization. Furthermore, a future Part III of the paper is suggested: FLT, by the mediation of Hilbert arithmetic in a wide sense, can be considered as another expression of Gleason’s theorem in quantum mechanics: the exclusions about n = 1, 2 in both theorems, as well as the validity for all remaining values of “n”, can be unified by the theory of quantum information.
The availability (respectively, non-availability) of solutions of Fermat’s equation can be proved to be equivalent to the non-availability (respectively, availability) of a single probabilistic measure in the sense of Gleason’s theorem.
The purpose of this paper is to investigate conditions for exponential behavior of the busy-period length distribution of the infinite-servers queue with Poisson arrivals. A general theoretical result, which is the basis of this work, is presented. The complementary analysis relies on the computation of the moments of the busy-period length distribution of the infinite-servers queue with Poisson arrivals. In practical applications of this queue, in the economic, management, and business areas, managing the effective number of servers is essential, since the physical presence of infinitely many servers is not viable; that condition must therefore be created through adequate management of the number of servers during the busy period.
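For a concrete handle on the busy period discussed above, a short simulation can be checked against the classical mean busy-period formula for the infinite-servers queue with Poisson arrivals, E[B] = (e^(rho) - 1)/lambda with rho = lambda E[S] (obtained from the idle/busy cycle structure, since the fraction of time the system is empty is e^(-rho)). The exponential-service special case and all parameter values below are illustrative.

```python
# Monte Carlo check of the M/M/infinity mean busy period against the
# classical formula E[B] = (exp(lam/mu) - 1) / lam.
import math
import random

def busy_period(lam, mu, rng):
    """Simulate one busy period via competing exponential clocks:
    with n customers present, the next event occurs at rate lam + n*mu
    and is an arrival with probability lam / (lam + n*mu)."""
    n, t = 1, 0.0
    while n > 0:
        rate = lam + n * mu
        t += rng.expovariate(rate)
        if rng.random() < lam / rate:
            n += 1
        else:
            n -= 1
    return t

rng = random.Random(1)
lam, mu = 1.0, 2.0          # rho = lam/mu = 0.5
reps = 200000
sim = sum(busy_period(lam, mu, rng) for _ in range(reps)) / reps
exact = (math.exp(lam / mu) - 1) / lam
print(sim, exact)           # simulated vs. exact mean busy-period length
```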
Farbod Taymouri, Marcello La Rosa, Marlon Dumas
et al.
Process variant analysis aims at identifying and addressing the differences existing in a set of process executions enacted by the same process model. A process model can be executed differently in different situations for various reasons, e.g., the process could run in different locations or seasons, which gives rise to different behaviors. Having intuitions about the discrepancies in process behaviors, though challenging, is beneficial for managers and process analysts since they can improve their process models efficiently, e.g., via interactive learning or adapting mechanisms. Several methods have been proposed to tackle the problem of uncovering discrepancies in process executions. However, because of the interdisciplinary nature of the challenge, the methods and sorts of analysis in the literature are very heterogeneous. This article not only presents a systematic literature review and taxonomy of methods for variant analysis of business processes but also provides a methodology including the required steps to apply this type of analysis for the identification of variants in business process executions.
Demand response has been implemented by distribution system operators to reduce peak demand and mitigate contingency issues on distribution lines and substations. In particular, campus-based commercial buildings make major contributions to peak demand in a distribution system. Prior works neglect consumers' comfort levels when performing demand response, which limits their applicability, since for most of the time the incentives do not compensate for the loss in comfort. Thus, a framework that comprehensively considers both operating costs and comfort levels is necessary. Moreover, distributed energy resources such as rooftop solar panels, plug-in electric vehicles, and energy storage units are widely deployed in commercial buildings, bringing various uncertainties to the distribution systems: renewable output, electricity prices, arrivals and departures of plug-in electric vehicles, business-hour demand response signals, and flexible energy demand. In this paper, we propose an optimal demand response framework that enables local control of demand-side appliances that are usually too small to participate in a retail electricity market. Several typical small demand-side appliances, i.e., heating, ventilation, and air conditioning systems, electric water heaters, and plug-in electric vehicles, are considered in our proposed model. Their operation is coordinated by a central controller whose objective is to minimize the total cost and maximize the customers' comfort levels for multiple commercial buildings. Scenario-based stochastic programming is leveraged to handle the aforementioned uncertainties. Numerical results based on practical data demonstrate the effectiveness of the proposed framework. In addition, the trade-off between the operating costs of commercial buildings and customers' comfort levels is illustrated.
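The cost-comfort trade-off under price uncertainty can be sketched with a toy scenario-based program: a controller picks one HVAC power level minimizing expected energy cost across price scenarios plus a comfort penalty. The scenario values, the quadratic comfort term, and the weight are illustrative assumptions; the paper's full model (water heaters, EVs, storage, multiple buildings) is not reproduced.

```python
# Toy scenario-based stochastic program: minimize expected cost plus a
# comfort-deviation penalty over a one-dimensional decision (HVAC kW).

scenarios = [            # (probability, electricity price per kWh)
    (0.5, 0.10),
    (0.3, 0.20),
    (0.2, 0.40),
]
comfort_target = 3.0     # HVAC power (kW) giving full comfort
weight = 0.05            # comfort weight: cost per (kW deviation)^2

def expected_cost(power):
    energy = sum(p * price * power for p, price in scenarios)
    discomfort = weight * (power - comfort_target) ** 2
    return energy + discomfort

# Grid search over 0.0 .. 5.0 kW in 0.1 kW steps.
best = min((expected_cost(x / 10), x / 10) for x in range(0, 51))
print(best)  # (minimum expected total cost, chosen kW)
```

Raising `weight` pushes the chosen power toward `comfort_target`, which is exactly the cost-comfort trade-off the abstract describes.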
In this paper, we present a generative model to generate a natural language sentence describing a table region, e.g., a row. The model maps a row from a table to a continuous vector and then generates a natural language sentence by leveraging the semantics of a table. To deal with rare words appearing in a table, we develop a flexible copying mechanism that selectively replicates contents from the table in the output sequence. Extensive experiments demonstrate the accuracy of the model and the power of the copying mechanism. On two synthetic datasets, WIKIBIO and SIMPLEQUESTIONS, our model improves the current state-of-the-art BLEU-4 score from 34.70 to 40.26 and from 33.32 to 39.12, respectively. Furthermore, we introduce an open-domain dataset WIKITABLETEXT including 13,318 explanatory sentences for 4,962 tables. Our model achieves a BLEU-4 score of 38.23, which outperforms template based and language model based approaches.
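The copying mechanism described above can be sketched with standard pointer-network machinery: at each decoding step, a vocabulary (generation) distribution and a copy-attention distribution over table cells are mixed through a gate p_gen, letting rare table contents surface in the output. The toy scores, gate value, and token names below are illustrative; the paper's exact architecture is not reproduced.

```python
# Minimal copy-mechanism sketch: mix a vocabulary distribution with a
# distribution over table cells, gated by p_gen.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

vocab = ["<unk>", "born", "in", "the"]
table_cells = ["Noam_Chomsky", "1928"]       # rare words live in the table

gen_logits = np.array([0.1, 2.0, 1.0, 0.2])  # decoder scores over vocab
copy_logits = np.array([3.0, 0.5])           # attention over table cells
p_gen = 0.4                                  # gate: generate vs. copy

p_vocab = p_gen * softmax(gen_logits)
p_copy = (1 - p_gen) * softmax(copy_logits)
tokens = vocab + table_cells
p_final = np.concatenate([p_vocab, p_copy])  # one distribution, both sources
print(tokens[int(p_final.argmax())])
```

Because the final distribution spans both the vocabulary and the table cells, a rare entity name can win the argmax even though it is absent from the decoder vocabulary.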
Business Intelligence and Analytics (BI&A) is the process of extracting and predicting business-critical insights from data. Traditional BI focused on data collection, extraction, and organization to enable efficient query processing for deriving insights from historical data. With the rise of big data and cloud computing, there are many challenges and opportunities for BI. In particular, with the growing number of data sources, traditional BI&A is evolving to provide intelligence at different scales and perspectives: operational BI, situational BI, and self-service BI. In this survey, we review the evolution of business intelligence systems at full scale, from back-end architecture to front-end applications. We focus on the changes in the back-end architecture, which deals with the collection and organization of the data. We also review the changes in front-end applications, where analytic services and visualization are the core components. Using a use case from BI in healthcare, one of the most complex enterprises, we show how BI&A will play an important role beyond its traditional usage. The survey provides a holistic view of Business Intelligence and Analytics for anyone interested in a complete picture of the different pieces in the emerging next generation of BI&A solutions.
Connecting different text attributes associated with the same entity (conflation) is important in business data analytics, since it can help merge two different tables in a database to provide a more comprehensive profile of an entity. However, the conflation task is challenging because two text strings that describe the same entity can be quite different from each other for reasons such as misspelling. It is therefore critical to develop a conflation model that is able to truly understand the semantic meaning of the strings and match them at the semantic level. To this end, we develop a character-level deep conflation model that encodes the input text strings at the character level into finite-dimensional feature vectors, which are then used to compute the cosine similarity between the text strings. The model is trained in an end-to-end manner using backpropagation and stochastic gradient descent to maximize the likelihood of the correct association. Specifically, we propose two variants of the deep conflation model, based on a long short-term memory (LSTM) recurrent neural network (RNN) and a convolutional neural network (CNN), respectively. Both models perform well on a real-world business analytics dataset and significantly outperform the baseline bag-of-character (BoC) model.
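The matching setup can be illustrated with the bag-of-character (BoC) baseline mentioned above (the trained LSTM/CNN encoders are not reproduced): each string becomes a character-count vector, and candidates are ranked by cosine similarity, so similar spellings score high even with typos. The example strings are hypothetical.

```python
# Bag-of-character baseline for conflation: character count vectors
# compared by cosine similarity.
import math
from collections import Counter

def char_vector(s):
    return Counter(s.lower())

def cosine(a, b):
    dot = sum(a[ch] * b[ch] for ch in a)   # missing keys count as 0
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

query = "Mcrosoft Corp"                    # misspelled entity string
candidates = ["Microsoft Corporation", "Oracle Corporation"]
scores = {c: cosine(char_vector(query), char_vector(c)) for c in candidates}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

The deep variants replace `char_vector` with a learned character-level encoder trained so that correct pairs have higher cosine similarity than incorrect ones, which is what lets them match at the semantic rather than surface level.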
We present the results of numerical calculations of the magnetizability ($χ$) of relativistic one-electron atoms with a pointlike, spinless, and motionless nucleus of charge $Ze$. Exploiting the analytical formula for $χ$ recently derived by us [P. Stefa{ń}ska, 2015], valid for an arbitrary discrete energy eigenstate, we have found the values of the magnetizability for the ground state and for the first and second sets of excited states (i.e., $2s_{1/2}$, $2p_{1/2}$, $2p_{3/2}$, $3s_{1/2}$, $3p_{1/2}$, $3p_{3/2}$, $3d_{3/2}$, and $3d_{5/2}$) of the Dirac one-electron atom. The results for ions with atomic number $1 \leqslant Z \leqslant 137$ are given in 14 tables. We also present a comparison of the numerical values of the magnetizability for the ground state and for each state belonging to the first set of excited states of selected hydrogenlike ions, obtained using two different values of the fine-structure constant, i.e., $α^{-1}=137.035 999 139$ (CODATA 2014) and $α^{-1}=137.035 999 074$ (CODATA 2010).
Turing progressions have often been used to measure the proof-theoretic strength of mathematical theories. Turing progressions based on $n$-provability give rise to a $Π_{n+1}$ proof-theoretic ordinal. As such, to each theory $U$ we can assign the sequence of corresponding $Π_{n+1}$ ordinals $\langle |U|_n\rangle_{n>0}$. We call this sequence a \emph{Turing-Taylor expansion} of a theory. In this paper, we relate Turing-Taylor expansions of sub-theories of Peano Arithmetic to Ignatiev's universal model for the closed fragment of the polymodal provability logic ${\mathbf{GLP}}_ω$. In particular, in this first draft we observe that each point in the Ignatiev model can be seen as the Turing-Taylor expansion of a formal mathematical theory. Moreover, each sub-theory of Peano Arithmetic that admits a Turing-Taylor expansion will define a unique point in Ignatiev's model.
In [12], we show that 3 of the 14 hypergeometric monodromy groups associated to Calabi-Yau threefolds are arithmetic. Brav-Thomas (in [3]) show that 7 of the remaining 11 are thin. In this article, we settle the arithmeticity problem for the 14 monodromy groups by showing that the remaining 4 are arithmetic.
In this article, we generalize several fundamental results for arithmetic divisors, such as the continuity of the volume function, the generalized Hodge index theorem, Fujita's approximation theorem for arithmetic divisors, and Zariski decompositions for arithmetic divisors on arithmetic surfaces, to the case of adelic arithmetic divisors.
The European statistical rules for governments, relevant among other things for determining deficit and debt under the EMU norms, and the international financial-reporting rules for governments have many similarities: both are accrual-based systems, and they report on the same transactions and events. However, they have different objectives, so some transactions and events are treated differently. In this article we analyse the similarities and differences between the statistical principles of the European System of National and Regional Accounts (ESR) and the accounting rules of the International Public Sector Accounting Standards (IPSAS) Board. With this analysis we aim to contribute to the understanding of the financial information that governments disclose, which is of great importance in this time of debt crisis. We conclude that government financial statements prepared under generally accepted accounting standards, and accompanied by an opinion from a court of audit or an auditor, can make an important contribution to the compilation of government finance statistics, and thereby to the European Commission's monitoring of the financial position of the European member states.