How much macroeconomic information is contained in a single input-output table? We feed FIGARO 64-sector symmetric tables into DEPLOYERS, a Darwinian agent-based simulator, producing genuine out-of-sample GDP forecasts. For each year, the model reads one FIGARO table for year N, self-organizes an artificial economy through evolutionary natural selection, then runs 12 months of autonomous free-market dynamics whose emergent growth rate predicts year N+1. The I-O table is the only input: no time series, no estimated parameters, no expectations formation, no external forecasts. We present five results. First, a 9-year Austrian panel (2010-2018) using 12-seed ensembles produces MAE of 1.22 pp overall; for five non-crisis years, MAE falls to 0.42 pp -- comparable to the best professional forecaster (WIFO: 0.48 pp). A Swedish 9-year panel independently confirms this accuracy (normal-years MAE 0.80 pp). Second, cross-country portability is demonstrated across 33 of 37 tested FIGARO countries with zero parameter changes. Third, a German 9-year panel reveals systematic +3.7 pp positive bias from export dependency -- an informative negative result pointing to multi-country network simulation as the natural extension. Fourth, a COVID-19 simulation demonstrates the I-O structure as a shock propagation mechanism: a 19-month timeline produces Year 1 GDP -4.62% vs empirical -6.6%. Fifth, emergent firm size distributions match European Commission data without micro-target calibration. These results establish the I-O table as serving a dual purpose: structural baseline engine and dynamic shock propagation mechanism. Since FIGARO covers 46 countries, the approach is immediately portable without retuning parameters.
Query-focused table summarization requires complex reasoning, often approached through step-by-step natural language (NL) plans. However, NL plans are inherently ambiguous and lack structure, limiting their conversion into executable programs like SQL and hindering scalability, especially for multi-table tasks. To address this, we propose a paradigm shift to structured representations. We introduce a new structured plan, TaSoF, inspired by formalism in traditional multi-agent systems, and a framework, SPaGe, that formalizes the reasoning process in three phases: 1) Structured Planning to generate TaSoF from a query, 2) Graph-based Execution to convert plan steps into SQL and model dependencies via a directed acyclic graph for parallel execution, and 3) Summary Generation to produce query-focused summaries. Our method explicitly captures complex dependencies and improves reliability. Experiments on three public benchmarks show that SPaGe consistently outperforms prior models in both single- and multi-table settings, demonstrating the advantages of structured representations for robust and scalable summarization.
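To make the graph-based execution phase concrete, here is a minimal sketch (not SPaGe's actual implementation) of running SQL plan steps in parallel once their dependencies complete, using Python's standard graphlib; the step contents and the run_sql helper are hypothetical:

```python
# Minimal sketch of dependency-driven parallel execution of SQL plan steps.
# `run_sql` is a hypothetical stand-in for a real database call.
from graphlib import TopologicalSorter
from concurrent.futures import ThreadPoolExecutor

def run_sql(step_id: str, sql: str) -> str:
    return f"result of {step_id}"        # placeholder for executing `sql`

def execute_plan(steps: dict[str, str], deps: dict[str, set[str]]) -> dict[str, str]:
    """Run each step as soon as all of its predecessor steps have finished."""
    ts = TopologicalSorter(deps)
    ts.prepare()                          # raises CycleError if deps are cyclic
    results: dict[str, str] = {}
    with ThreadPoolExecutor() as pool:
        while ts.is_active():
            ready = ts.get_ready()        # all steps whose deps are satisfied
            futures = {s: pool.submit(run_sql, s, steps[s]) for s in ready}
            for s, fut in futures.items():
                results[s] = fut.result()
                ts.done(s)
    return results

# Example: s2 and s3 both depend on s1 and therefore run concurrently.
plan = {"s1": "SELECT ...", "s2": "SELECT ...", "s3": "SELECT ..."}
print(execute_plan(plan, {"s1": set(), "s2": {"s1"}, "s3": {"s1"}}))
```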
Business cycles (a periodic variation of, e.g., GDP over five to ten years) exist, but a proper explanation for them is still lacking. Here we extend the well-known NAIRU (non-accelerating inflation rate of unemployment) model, resulting in a set of differential equations. The solution, however, is marginally stable. Therefore we find a natural sinusoidal oscillation of inflation and unemployment, just as observed in business cycles. When speculation is present, the instability becomes more severe. So we present for the first time a mathematical explanation for business cycles. The steering of central banks by setting interest rates to keep inflation stable and low needs an overhaul. One has to distinguish between real monetary instability and the one caused naturally by business cycles.
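For intuition, a minimal linear illustration (not the paper's specific NAIRU extension): a planar system whose coefficient matrix has purely imaginary eigenvalues $\pm i\omega$ is marginally stable, and all of its solutions are sustained sinusoids,
\[
\dot{u} = -\omega v, \qquad \dot{v} = \omega u
\quad\Longrightarrow\quad
u(t) = A\cos(\omega t + \varphi), \qquad v(t) = A\sin(\omega t + \varphi),
\]
so deviations of inflation and unemployment from equilibrium neither decay nor grow, but oscillate with a fixed period.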
Age-specific life-table death counts observed over time are examples of densities. Non-negativity and summability are constraints that sometimes require modifications of standard linear statistical methods. The centered log-ratio transformation presents a mapping from a constrained to a less constrained space. With a time series of densities, forecasts are more relevant to the recent data than the data from the distant past. We introduce a weighted compositional functional data analysis for modeling and forecasting life-table death counts. Our extension assigns higher weights to more recent data and provides a modeling scheme easily adapted for constraints. We illustrate our method using age-specific Swedish life-table death counts from 1751 to 2020. Compared to their unweighted counterparts, the weighted compositional data analytic method improves short-term point and interval forecast accuracies. The improved forecast accuracy could help actuaries improve the pricing of annuities and setting of reserves.
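As a concrete reference point, the centered log-ratio transform and its inverse can be sketched in a few lines (an illustration, not the authors' code):

```python
# Sketch of the centered log-ratio (clr) transform used for compositional
# data: it maps a positive composition to an unconstrained vector and back.
import numpy as np

def clr(x: np.ndarray) -> np.ndarray:
    logx = np.log(x)
    return logx - logx.mean()            # components of the result sum to zero

def inv_clr(y: np.ndarray, total: float = 1.0) -> np.ndarray:
    e = np.exp(y)
    return total * e / e.sum()           # non-negative, sums to `total` again

# Toy density of death counts over three age groups.
d = np.array([0.2, 0.5, 0.3])
assert np.allclose(inv_clr(clr(d)), d)   # round trip recovers the composition
```

The weighted extension described in the abstract would then operate on the clr-transformed series, down-weighting observations from the distant past.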
In the process of evolution, the brain has achieved a perfection that artificial intelligence systems lack and that calls for its own mathematics. The concept of the cognitome, introduced by academician K.V. Anokhin as the cognitive structure of the mind -- a high-order structure of the brain and a neural hypernetwork -- is considered as the basis for modeling. Consciousness then is a special form of dynamics in this hypernetwork -- a large-scale integration of its cognitive elements. The cognitome, in turn, consists of interconnected COGs (cognitive groups of neurons) of two types -- functional systems and cellular ensembles. K.V. Anokhin sees the task of the fundamental theory of the brain and mind in describing these structures, their origin, their functions, and the processes in them. The paper presents mathematical models of these structures based on new mathematical results, as well as models of different cognitive processes in terms of these models. In addition, it is shown that these models can be derived from a fairly general principle of how the brain works: \textit{the brain discovers all possible causal relationships in the external world and draws all possible conclusions from them}. Based on these results, the paper presents models of: ``natural'' classification; the theory of functional brain systems by P.K. Anokhin; the prototype theory of categorization by E. Rosch; the theory of causal models by Bob Rehder; and the theory of consciousness as integrated information by G. Tononi.
These are the lecture notes that accompanied the course of the same name that I taught at the Eindhoven University of Technology from 2021 to 2023. The course is intended as an introduction to neural networks for mathematics students at the graduate level and aims to make mathematics students interested in further researching neural networks. It consists of two parts: first, a general introduction to deep learning that focuses on introducing the field in a formal mathematical way; second, an introduction to the theory of Lie groups and homogeneous spaces and how it can be applied to design neural networks with desirable geometric equivariances. The lecture notes were made as self-contained as possible so as to be accessible to any student with a moderate mathematics background. The course also included coding tutorials and assignments in the form of a set of Jupyter notebooks that are publicly available at https://gitlab.com/bsmetsjr/mathematics_of_neural_networks.
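As a small taste of the equivariance theme in the second part, here is a toy check (not taken from the notes) that circular convolution commutes with cyclic shifts, i.e., that it is equivariant to the translation action of $\mathbb{Z}/n\mathbb{Z}$:

```python
# Equivariance check: shifting the input of a circular convolution shifts
# the output by the same amount.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)                  # signal on a cyclic grid
k = rng.normal(size=16)                  # convolution kernel

def circ_conv(x, k):
    # Circular convolution computed exactly via the FFT.
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

shift = lambda v: np.roll(v, 3)          # group action: translate by 3
assert np.allclose(circ_conv(shift(x), k), shift(circ_conv(x, k)))
```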
Michael Glass, Sugato Bagchi, Oktie Hassanzadeh
et al.
Increasing amounts of structured data can provide value for research and business if the relevant data can be located. Often the data sits in a data lake without a consistent schema, making locating useful data challenging. Table search is a growing research area, but existing benchmarks have been limited to displayed tables. Tables sized and formatted for display in a Wikipedia page or arXiv paper are considerably different from data tables in both scale and style. By using metadata associated with open data from government portals, we create the first dataset to benchmark search over data tables at scale. We demonstrate three styles of table-to-table search, based on three notions of table relatedness: tables produced by the same organization, tables distributed as part of the same dataset, and tables with a high degree of overlap in their annotated tags. The keyword tags provided with the metadata also permit the automatic creation of a benchmark for keyword search over tables. We provide baselines on this dataset using existing methods, including traditional and neural approaches.
This paper considers a practical truncated traveling salesman problem (TTSP), in which the salesman is only required to cover a prescribed subset of the given cities (rather than all of them, as in the conventional traveling salesman problem (TSP)) with minimal traversal distance. Every feasible solution tour thus contains exactly the prescribed number of cities, including the starting city. Although TSP has received extensive research attention and various efficient solution techniques, including exact, heuristic, and metaheuristic algorithms, have been devoted to it, very limited attention has been given to TTSP models because of their solution structure. The TTSP model comprises two coupled problems: city selection, i.e., since the salesman's trip need not include all the cities, identifying which combination of cities to visit; and sequencing, i.e., determining which ordering of the selected cities yields the minimal traversal distance. A hybrid genetic algorithm (GA) comprising sophisticated mutation operators is developed to tackle this problem efficiently. Comparative computational findings suggest that the proposed GA outperforms existing approaches on TTSP instances. The improved results reported here can serve as a basis for forthcoming TTSP studies.
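To make the two coupled subproblems concrete, a stripped-down, mutation-driven GA might look as follows (a sketch under simplifying assumptions, not the paper's hybrid GA: `dist` is a full symmetric distance matrix, city 0 is the fixed start, and there is no crossover):

```python
# Toy GA for the TTSP: chromosomes are closed tours over k of n cities.
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def mutate(tour, n_cities):
    t = tour[:]
    if random.random() < 0.5 and len(t) > 3:          # sequencing: swap two stops
        i, j = random.sample(range(1, len(t)), 2)
        t[i], t[j] = t[j], t[i]
    else:                                             # selection: swap in an unused city
        unused = [c for c in range(n_cities) if c not in t]
        if unused:
            t[random.randrange(1, len(t))] = random.choice(unused)
    return t

def ga_ttsp(dist, k, pop=50, gens=200):
    n = len(dist)
    population = [[0] + random.sample(range(1, n), k - 1) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda t: tour_length(t, dist))
        elite = population[: pop // 2]                # truncation selection
        population = elite + [mutate(random.choice(elite), n) for _ in elite]
    return min(population, key=lambda t: tour_length(t, dist))
```

The two mutation branches mirror the problem structure: one explores the ordering of the chosen cities, the other explores which cities are chosen.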
Analysis, Business mathematics. Commercial arithmetic. Including tables, etc.
A major challenge in fine-tuning deep learning models for automatic summarization is the need for large domain-specific datasets. One of the barriers to curating such data from resources like online publications is navigating the license regulations applicable to their re-use, especially for commercial purposes. As a result, despite the availability of several business journals, there are no large-scale datasets for summarizing business documents. In this work, we introduce Open4Business (O4B), a dataset of 17,458 open-access business articles and their reference summaries. The dataset introduces a new challenge for summarization in the business domain, requiring highly abstractive and more concise summaries as compared to other existing datasets. Additionally, we evaluate existing models on it and consequently show that models trained on O4B and on a 7x larger non-open-access dataset achieve comparable performance on summarization. We release the dataset, along with the code, which can be leveraged to similarly gather data for multiple domains.
Saeed Nosratabadi, Gergo Pinter, Amir Mosavi
et al.
Sustainable business models also offer banks competitive advantages such as increased brand reputation and cost reduction. However, no framework has been presented to evaluate the sustainability of banking business models. To bridge this theoretical gap, the current study, using a Delphi-Analytic Hierarchy Process method, first developed a sustainable business model with which to evaluate the sustainability of banks' business models. In the second step, the sustainability performance of sixteen banks from eight European countries (Norway, the UK, Poland, Hungary, Germany, France, Spain, and Italy) was assessed. The proposed business model components were ranked in terms of their impact on achieving sustainability goals; in order of impact they are value proposition, core competencies, financial aspects, business processes, target customers, resources, technology, customer interface, and partner network. The comparison of the studied banks by country disclosed that the sustainability of the Norwegian and German banks' business models is higher than in other countries. The studied banks of Hungary and Spain came in second, the banks of the UK, Poland, and France ranked third, and the Italian banks ranked fourth in the sustainability of their business models.
Richard Brent, Carl Pomerance, David Purdum
et al.
Let $M(n)$ denote the number of distinct entries in the $n \times n$ multiplication table. The function $M(n)$ has been studied by Erdős, Tenenbaum, Ford, and others, but the asymptotic behaviour of $M(n)$ as $n \to \infty$ is not known precisely. Thus, there is some interest in algorithms for computing $M(n)$ either exactly or approximately. We compare several algorithms for computing $M(n)$ exactly, and give a new algorithm that has a subquadratic running time. We also present two Monte Carlo algorithms for approximate computation of $M(n)$. We give the results of exact computations for values of $n$ up to $2^{30}$, and of Monte Carlo computations for $n$ up to $2^{100,000,000}$, and compare our experimental results with Ford's order-of-magnitude result.
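For orientation, the quadratic baseline that such algorithms improve upon is immediate (a sketch, not one of the paper's algorithms):

```python
# O(n^2) computation of M(n): count distinct entries of the multiplication table.
def M(n: int) -> int:
    products = set()
    for i in range(1, n + 1):
        for j in range(i, n + 1):        # the table is symmetric, so j >= i suffices
            products.add(i * j)
    return len(products)

# The 4 x 4 table has entries {1,2,3,4,6,8,9,12,16}, hence M(4) = 9.
assert M(1) == 1 and M(4) == 9
```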
Eugene Wigner famously argued for the "unreasonable effectiveness of mathematics" for describing physics and other natural sciences in his 1960 essay. That essay has now led to some 55 years of (sometimes anguished) soul searching --- responses range from "So what? Why do you think we developed mathematics in the first place?", through to extremely speculative ruminations on the existence of the universe (multiverse) as a purely mathematical entity --- the Mathematical Universe Hypothesis. In the current essay I will steer an utterly prosaic middle course: much of the mathematics we develop is informed by physics questions we are trying to solve, and those physics questions for which the most utilitarian mathematics has successfully been developed are typically those where the best physics progress has been made.
Problem 2 at the 56th International Mathematical Olympiad (2015) asks for all triples (a,b,c) of positive integers for which ab-c, bc-a, and ca-b are all powers of 2. We show that this problem requires only a primitive form of arithmetic, going back to the Pythagoreans, which is the arithmetic of the even and the odd.
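For reference, the "arithmetic of the even and the odd" amounts to the two parity tables below (standard background, not the solution itself):
\[
\begin{array}{c|cc}
+ & E & O \\ \hline
E & E & O \\
O & O & E
\end{array}
\qquad
\begin{array}{c|cc}
\times & E & O \\ \hline
E & E & E \\
O & E & O
\end{array}
\]
For instance, if $a$, $b$, $c$ are all odd, then $ab - c$ is even, so as a power of 2 it must be at least $2^1$ (it cannot equal $2^0 = 1$).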
Commercially available business process management systems (BPMS) still struggle to support organizations in enacting their business processes in an effective and efficient way. Current BPMS are, in general, based on BPMN 2.0 and/or BPEL. It is well known that these approaches have some restrictions regarding modeling and the immediate transfer of the model into executable code. Recently, a method for modeling and execution of business processes, named subject-oriented business process management (S-BPM), gained attention. This methodology facilitates modeling of any business process using only five symbols and allows direct execution based on such models. Furthermore, the methodology has a strong theoretical and formal basis for realizing distributed systems: any process is defined as a network of independent and distributed agents, i.e., instances of subjects, which coordinate work through the exchange of messages. In this work, we present a framework and a prototype based on off-the-shelf technologies as a possible realization of the S-BPM methodology. The prototype demonstrates the principal architecture concept; these results should also stimulate a discussion about current BPMS and their underlying concepts.
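The core S-BPM idea, subjects as independent agents that coordinate purely by exchanging messages, can be illustrated with a minimal toy (not the presented prototype; the subject names and behaviors are hypothetical):

```python
# Toy rendering of the S-BPM view: each subject is an independent agent with
# its own inbox; all coordination happens through send and receive actions.
import queue
import threading

class Subject(threading.Thread):
    def __init__(self, name, behavior):
        super().__init__()
        self.name, self.behavior, self.inbox = name, behavior, queue.Queue()

    def send(self, other, msg):
        other.inbox.put((self.name, msg))             # "send" action

    def receive(self):
        return self.inbox.get()                       # blocking "receive" action

    def run(self):
        self.behavior(self)

def customer_behavior(subj):
    subj.send(clerk, "order")                         # place an order
    print(subj.name, "received:", subj.receive())     # wait for confirmation

def clerk_behavior(subj):
    sender, msg = subj.receive()                      # wait for an order
    subj.send(customer, "confirmation")               # confirm back to the sender

customer = Subject("customer", customer_behavior)
clerk = Subject("clerk", clerk_behavior)
for s in (customer, clerk):
    s.start()
for s in (customer, clerk):
    s.join()
```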
We pose thirty conjectures on arithmetical sequences, most of which concern the monotonicity of sequences of the form $(\sqrt[n]{a_n})_{n\ge 1}$ or of the form $(\sqrt[n+1]{a_{n+1}}/\sqrt[n]{a_n})_{n\ge 1}$, where $(a_n)_{n\ge 1}$ is a number-theoretic or combinatorial sequence of positive integers. This material might stimulate further research.
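Such conjectures are easy to probe numerically. For example, a quick exploratory check (which of course proves nothing) of whether $\sqrt[n]{p_n}$ is strictly decreasing for the primes over a small range:

```python
# Exploratory check: is n -> p_n**(1/n) strictly decreasing for the primes
# below 10000?  Self-contained sieve; no external libraries.
def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_p in enumerate(sieve) if is_p]

ps = primes_up_to(10_000)
roots = [p ** (1.0 / n) for n, p in enumerate(ps, start=1)]
print(all(x > y for x, y in zip(roots, roots[1:])))   # True on this range
```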
Cloud computing providers' and customers' services are exposed not only to existing security risks but, due to multi-tenancy, outsourcing of applications and data, and virtualization, to emergent ones as well. Therefore, both cloud providers and customers must establish an information security system and mutual trustworthiness, extending to end users as well. In this paper we analyze the main international and industrial standards targeting information security and their conformity with cloud computing security challenges. We find that almost all main cloud service providers (CSPs) are, at minimum, ISO 27001:2005 certified. As a result, we propose an extension to the ISO 27001:2005 standard with a new control objective about virtualization, so that the standard remains generic regardless of a company's type, size, and nature, and thus becomes applicable to cloud systems as well, where virtualization is the baseline. We also define a quantitative metric and evaluate the importance factor of ISO 27001:2005 control objectives when customer services are hosted on-premise or in the cloud. The conclusion is that obtaining the ISO 27001:2005 certificate (or retaining it, if already obtained) will further improve CSP and cloud customer information security systems and introduce mutual trust in cloud services, but will not cover all relevant issues. In this paper we also continue our efforts regarding the business continuity detriments that cloud computing produces, and propose some solutions that mitigate the risks.
Predicting the transition temperature, Tc, of a superconductor from Periodic Table normal-state properties is regarded as one of the grand challenges of superconductivity. By studying the correlations of Periodic Table properties with known superconductors, it is possible to estimate their transition temperatures. Starting from the isotope effect and correlations of superconductivity with electronegativity ($\chi$), valence electron count per atom (Ne), atomic number (Z), and formula weight (Fw), we derive an empirical formula for estimating Tc that includes an unknown parameter (Ko). With average values of $\chi$, Ne, and Z, we develop a material specific characterization dataset (MSCD) model of a superconductor that is quantitatively useful for characterizing and comparing superconductors. We show that for most superconductors, Ko correlates with Fw/Z, Ne, Z, the number of atoms (An) in the formula, the number of elements (En), and with Tc. We study some superconductor families and use the discovered correlations to predict similar and novel superconductors and to estimate their Tcs. Thus the material specific equations derived in this paper, the MSCD system developed here, and the discovered correlations between Tc and Fw/Z, En, and An provide the building blocks for the analysis, design, and search of potential novel high-temperature superconductors with specific estimated Tcs.
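To illustrate the kind of descriptors involved, here is a minimal sketch that computes Fw, average Z, Fw/Z, An, and En from a chemical composition (illustrative only, not the paper's MSCD code; the constant tables are approximate and cover only the elements shown):

```python
# MSCD-style descriptors from a composition given as {element: count}.
# Approximate constants, sufficient for a YBa2Cu3O7-style example only.
ATOMIC_MASS = {"Y": 88.91, "Ba": 137.33, "Cu": 63.55, "O": 16.00}
ATOMIC_NUMBER = {"Y": 39, "Ba": 56, "Cu": 29, "O": 8}

def mscd(composition):
    an = sum(composition.values())                    # An: atoms per formula unit
    en = len(composition)                             # En: number of elements
    fw = sum(ATOMIC_MASS[e] * c for e, c in composition.items())
    z_avg = sum(ATOMIC_NUMBER[e] * c for e, c in composition.items()) / an
    return {"Fw": fw, "Z": z_avg, "Fw/Z": fw / z_avg, "An": an, "En": en}

print(mscd({"Y": 1, "Ba": 2, "Cu": 3, "O": 7}))
```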
Serge Autexier, Catalin David, Dominik Dietrich
et al.
Mathematical knowledge is a central component in science, engineering, and technology (documentation). Most of it is represented informally and -- in contrast to published research mathematics -- is subject to continual change. Unfortunately, machine support for change management has either been very coarse-grained and thus barely useful, or restricted to formal languages, where automation is possible. In this paper, we report on an effort to extend change management to collections of semi-formal documents which flexibly intermix mathematical formulas and natural language, and to integrate it into a semantic publishing system for mathematical knowledge. We validate the long-standing assumption that the semantic annotations which drive the machine-supported interaction with these flexiformal documents can at the same time support semantic impact analyses. But in contrast to the fully formal setting, where adaptations of impacted documents can be automated to some degree, the flexiformal setting requires much more user interaction and thus a much tighter integration into document management workflows.