Climate change has intensified the urgency of effective carbon sink solutions, yet the integration of Information and Communications Technologies (ICT) in these systems remains fragmented despite its transformative potential. This paper provides a comprehensive analysis of ICT applications in carbon sink projects from both economic and engineering perspectives, a dual-lens approach rarely explored in the existing literature. In carbon trading, blockchain has improved transaction speed by 40%, while AI-based optimizations have reduced operational costs by 15% in projects such as Petra Nova. Through systematic examination, we identify three key findings: (1) ICT transforms carbon economics through digital financing platforms and blockchain-based trading systems, with AI enhancing price prediction, though data interoperability remains challenging; (2) digital technologies advance both natural and artificial sequestration, from forest monitoring to Carbon Capture, Use and Storage (CCUS) optimization, yet lack integrated real-time control solutions; (3) realizing ICT's full potential requires addressing its environmental costs, strengthening policy support, and fostering interdisciplinary collaboration. By bridging the economics-engineering divide and mapping current applications alongside future opportunities, this paper demonstrates that deeper integration of digital technologies is essential to scale carbon sink solutions to meet climate targets.
This paper provides a brief overview of the ongoing financial revolution, which extends beyond the emergence of cryptocurrencies as a digital medium of exchange. At its core, this revolution is driven by a paradigm shift rooted in the technological advancements of blockchain and the foundational principles of Islamic economics. Together, these elements offer a transformative framework that challenges traditional financial systems, emphasizing transparency, equity, and decentralized governance. The paper highlights the implications of this shift and its potential to reshape the global economic landscape.
We develop evaluation methods for measuring the economic decision-making capabilities and tendencies of LLMs. First, we develop benchmarks derived from key problems in economics -- procurement, scheduling, and pricing -- that test an LLM's ability to learn from the environment in context. Second, we develop the framework of litmus tests, evaluations that quantify an LLM's choice behavior on a stylized decision-making task with multiple conflicting objectives. Each litmus test outputs three scores: a litmus score, which quantifies an LLM's tradeoff response; a reliability score, which measures the coherence of an LLM's choice behavior; and a competency score, which measures an LLM's capability at the same task when the conflicting objectives are replaced by a single, well-specified objective. Evaluating a broad array of frontier LLMs, we (1) investigate changes in LLM capabilities and tendencies over time, (2) derive economically meaningful insights from the LLMs' choice behavior and chain-of-thought, and (3) validate our litmus test framework by testing self-consistency, robustness, and generalizability. Overall, this work provides a foundation for evaluating LLM agents as they are further integrated into economic decision-making.
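The abstract does not spell out the scoring formulas, so the following is a minimal sketch, assuming each litmus-test item reduces to a repeated binary choice between two conflicting objectives; the function names and scoring rules are illustrative, not the authors'.

```python
# Minimal sketch of the three scores described in the abstract.  Assumes each
# litmus-test item is a binary choice between two conflicting objectives
# (A vs. B), repeated several times; names and rules are illustrative only.
from collections import Counter
from typing import List

def litmus_score(choices: List[str]) -> float:
    """Fraction of choices that favour objective A (the tradeoff response)."""
    return sum(c == "A" for c in choices) / len(choices)

def reliability_score(repeated_choices: List[List[str]]) -> float:
    """Coherence: average agreement of repeated runs on identical prompts."""
    agreements = []
    for runs in repeated_choices:                 # runs = answers to one prompt
        most_common = Counter(runs).most_common(1)[0][1]
        agreements.append(most_common / len(runs))
    return sum(agreements) / len(agreements)

def competency_score(answers: List[str], correct: List[str]) -> float:
    """Accuracy on the single-objective version of the same task."""
    return sum(a == c for a, c in zip(answers, correct)) / len(answers)

if __name__ == "__main__":
    print(litmus_score(["A", "B", "A", "A"]))                     # 0.75
    print(reliability_score([["A", "A", "B"], ["B", "B", "B"]]))  # ~0.83
    print(competency_score(["x", "y", "z"], ["x", "y", "q"]))     # ~0.67
```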
The long-term estimation of the Marxist average rate of profit does not adhere to a theoretically grounded standard regarding which economic activities should or should not be included for such purposes. This matters because methodological non-uniformity can be a significant source of overestimation or underestimation, yielding a less accurate reflection of capital accumulation dynamics. This research aims to provide a standard Marxist decision criterion for the inclusion and exclusion of economic activities in the calculation of the Marxist average profit rate for United States economic sectors from 1960 to 2020, based on the Marxist definition of productive labor, its location in the circuit of capital, and its relationship with the production of surplus value. Using wavelet-transformed Daubechies filters with increased symmetry, empirical mode decomposition, a Hodrick-Prescott filter embedded in an unobserved components model, and a wide variety of unit root tests, the internal theoretical consistency of the presented criteria is evaluated. The objective consistency of the theory is also evaluated with a dynamic factor autoregressive model, Principal Component Analysis via Singular Value Decomposition, and regularized Horseshoe regression. The results are consistent, both theoretically and econometrically, with the logic of Classical Marxist political economy.
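As a small illustration of one tool named above, here is a minimal sketch of Principal Component Analysis via Singular Value Decomposition applied to a (years x sectors) matrix of profit-rate series; the synthetic data and variable names are placeholders, not the paper's dataset.

```python
# Minimal sketch: PCA via SVD on a (years x sectors) matrix of profit rates.
# The data here are synthetic placeholders, not the paper's series.
import numpy as np

rng = np.random.default_rng(0)
common = rng.normal(size=61)                      # shared trend, 1960-2020
X = np.column_stack([common + 0.3 * rng.normal(size=61) for _ in range(8)])

Xc = X - X.mean(axis=0)                           # centre each sector's series
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

explained = s**2 / np.sum(s**2)                   # variance share per component
print("variance explained by first component:", round(explained[0], 3))
```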
Economic unions are international agreements aimed at increasing economic efficiency and establishing political and cultural ties between member countries. Becoming a member of an existing union usually requires the approval of both the candidate and the members, whereas leaving may require only the unilateral will of the exiting country. There are many examples of states acceding to previously consolidated economic unions, and a recent example of leaving is the withdrawal of the United Kingdom from the European Union. Motivated by the Brexit process, in this paper we propose an agent-based model to study the determinant factors driving withdrawals from an economic union. We show that both Union and local taxes promote exits, whereas customs fees on trade outside the Union boost cohesion. Furthermore, heterogeneity in both business conditions and wealth distribution promotes withdrawals, while diversity in country size does not have a significant effect. We also delve into the individual causes that lead to dissatisfaction and, ultimately, to exits. We find that, for low Union taxes, wealth inequality within the country is the leading cause of anti-Union opinion spreading. Conversely, for high Union taxes, the country's performance becomes the main driving force, creating a risk that wealthier countries leave the Union. These findings will be helpful for the design of economic policies and effective informative campaigns.
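A toy sketch of the kind of agent-based mechanism the abstract describes: agents pay local and Union taxes, receive an equal Union transfer, and a country exits when a majority of its agents are dissatisfied. All rules and parameters are illustrative assumptions, not the paper's model.

```python
# Toy agent-based sketch: Union and local taxes, an equal Union transfer,
# and a country-level exit rule.  All rules and parameters are illustrative
# assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(1)
n_countries, n_agents = 5, 200
union_tax, local_tax, exit_threshold = 0.10, 0.15, 0.5

# heterogeneous business conditions: country-specific mean income
country_mean = rng.uniform(0.7, 1.3, size=n_countries)

paid_to_union = np.zeros((n_countries, n_agents))
received_from_union = np.zeros((n_countries, n_agents))

for step in range(50):
    income = country_mean[:, None] * rng.lognormal(0.0, 0.5,
                                                   size=(n_countries, n_agents))
    taxes = union_tax * income
    transfer = taxes.sum() / taxes.size          # equal per-agent redistribution
    paid_to_union += taxes
    received_from_union += transfer

# an agent is dissatisfied if it paid more to the Union than it received back;
# a country leaves when a majority of its agents are dissatisfied
dissatisfied = (paid_to_union > received_from_union).mean(axis=1)
print("dissatisfied share per country:", np.round(dissatisfied, 2))
print("countries leaving the Union:",
      np.where(dissatisfied > exit_threshold)[0].tolist())
```

In this toy version, countries with better business conditions pay in more than they receive back, so their agents become dissatisfied first, loosely mirroring the abstract's finding that wealthier countries face a higher exit risk under high Union taxes.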
This exercise proposes a learning mechanism that models an economic agent's decision-making process using an actor-critic structure from the artificial intelligence literature. It is motivated by the psychology literature on learning through the reinforcement of good and bad decisions. To learn to make decisions, this AI agent interacts with its environment and takes explorative actions. Each action in a given state brings a reward signal to the agent. This interactive experience is saved in the agent's memory, which is then used to update its subjective belief about the world. The agent's decision-making strategy is formed and adjusted based on this evolving subjective belief. The agent does not only take actions that it knows will bring a high reward; it also explores other possibilities. This process of taking explorative actions ensures that the agent notices changes in its environment and adapts its subjective belief and decisions accordingly. Through a model of stochastic optimal growth, I illustrate that an economic agent under this learning structure adapts to changes in an underlying stochastic process of the economy. AI agents can differ in their levels of exploration, which leads to different experiences in the same environment; this is reflected in their different learning behaviours and the welfare they obtain. The chosen economic structure captures the fundamental decision-making problem of macroeconomic models, i.e., how to make consumption-saving decisions over a lifetime, and it can be generalised to other decision-making processes and economic models.
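A minimal sketch of a tabular actor-critic agent on a toy consumption-saving problem, in the spirit of the mechanism described above; the environment, wealth discretisation, learning rates, and softmax exploration temperature are illustrative assumptions, not the paper's specification.

```python
# Toy tabular actor-critic for a consumption-saving problem: each period the
# agent splits cash-on-hand between consumption (log utility) and saving.
# The environment and parameters are illustrative, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
rates = np.array([0.0, 0.2, 0.4, 0.6, 0.8])          # candidate savings rates
bins = np.array([1.0, 2.0, 4.0, 8.0])                # wealth discretisation
n_states, n_actions = len(bins) + 1, len(rates)
theta = np.zeros((n_states, n_actions))              # actor: action preferences
value = np.zeros(n_states)                           # critic: state values
gamma, alpha_actor, alpha_critic, temp = 0.95, 0.05, 0.1, 1.0

def softmax(x):
    z = np.exp((x - x.max()) / temp)
    return z / z.sum()

wealth = 1.0
for t in range(20000):
    s = np.digitize(wealth, bins)
    probs = softmax(theta[s])
    a = rng.choice(n_actions, p=probs)               # explorative action
    income = rng.lognormal(mean=0.0, sigma=0.3)      # stochastic income
    cash = wealth + income
    consumption = (1 - rates[a]) * cash
    reward = np.log(consumption + 1e-8)
    wealth = 1.02 * rates[a] * cash                  # gross return on savings
    s_next = np.digitize(wealth, bins)

    td_error = reward + gamma * value[s_next] - value[s]
    value[s] += alpha_critic * td_error              # critic update
    grad = -probs
    grad[a] += 1.0                                   # d log pi(a|s) / d theta
    theta[s] += alpha_actor * td_error * grad        # actor update

print("learned savings-rate choice per wealth bin:",
      rates[theta.argmax(axis=1)])
```

The softmax temperature plays the role of the exploration level the abstract refers to: a higher temperature makes the agent try more alternatives, at the cost of short-run reward.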
Nicolò Cesa-Bianchi, Tommaso Cesari, Roberto Colomboni, et al.
Bilateral trade, a fundamental topic in economics, models the problem of intermediating between two strategic agents, a seller and a buyer, willing to trade a good for which they hold private valuations. Despite the simplicity of this problem, a classical result by Myerson and Satterthwaite (1983) affirms the impossibility of designing a mechanism which is simultaneously efficient, incentive compatible, individually rational, and budget balanced. This impossibility result fostered an intense investigation of meaningful trade-offs between these desired properties. Much work has focused on approximately efficient fixed-price mechanisms, e.g., Blumrosen and Dobzinski (2014; 2016) and Colini-Baldeschi et al. (2016), which have been shown to fully characterize strongly budget balanced and ex-post individually rational direct revelation mechanisms. All these results, however, either assume some knowledge of the priors of the seller/buyer valuations, or black-box access to some samples of the distributions, as in Dütting et al. (2021). In this paper, we cast for the first time the bilateral trade problem in a regret minimization framework over rounds of seller/buyer interactions, with no prior knowledge of the private seller/buyer valuations. Our main contribution is a complete characterization of the regret regimes for fixed-price mechanisms with different models of feedback and private valuations, using as benchmark the best fixed price in hindsight. More precisely, we prove the following bounds on the regret: $\bullet$ $\widetilde{\Theta}(\sqrt{T})$ for full feedback (i.e., direct revelation mechanisms); $\bullet$ $\widetilde{\Theta}(T^{2/3})$ for realistic feedback (i.e., posted-price mechanisms) and independent seller/buyer valuations with bounded densities; $\bullet$ $\Theta(T)$ for realistic feedback and seller/buyer valuations with bounded densities; $\bullet$ $\Theta(T)$ for realistic feedback and independent seller/buyer valuations; $\bullet$ $\Theta(T)$ for the adversarial setting.
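A minimal simulation sketch of the repeated fixed-price problem the abstract sets up: a trade at price p yields gain from trade b - s only if the seller accepts (s <= p) and the buyer accepts (b >= p), and regret is measured against the best fixed price in hindsight. The greedy full-feedback learner below is only an illustration, not one of the paper's algorithms.

```python
# Minimal simulation of repeated fixed-price bilateral trade with full
# feedback: gain from trade b - s is realised only if s <= p <= b.  The
# follow-the-leader learner is an illustration, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)
T = 20000
prices = np.linspace(0, 1, 51)                       # candidate fixed prices
cum_gain = np.zeros_like(prices)                     # hindsight gain per price

learner_gain = 0.0
for t in range(T):
    s, b = rng.random(), rng.random()                # seller/buyer valuations
    p = prices[cum_gain.argmax()] if t > 0 else 0.5  # follow-the-leader choice
    learner_gain += (b - s) if (s <= p <= b) else 0.0
    # full feedback: every candidate price can be evaluated on (s, b)
    cum_gain += np.where((prices >= s) & (prices <= b), b - s, 0.0)

best_fixed = cum_gain.max()                          # best fixed price in hindsight
print("regret of the greedy learner:", round(best_fixed - learner_gain, 2))
```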
The lack of interpretability and transparency is preventing economists from using advanced tools like neural networks in their empirical research. In this paper, we propose a class of interpretable neural network models that can achieve both high prediction accuracy and interpretability. The model can be written as a simple function of a regularized number of interpretable features, which are the outcomes of interpretable functions encoded in the neural network. Researchers can design different forms of interpretable functions based on the nature of their tasks. In particular, we encode a class of interpretable functions named persistent change filters in the neural network to study time series cross-sectional data. We apply the model to predicting individuals' monthly employment status using high-dimensional administrative data and achieve an accuracy of 94.5% in the test set, comparable to the best-performing conventional machine learning methods. Furthermore, the interpretability of the model allows us to understand the mechanism that underlies the prediction: an individual's employment status is closely related to whether she pays different types of insurance. Our work is a useful step towards overcoming the black-box problem of neural networks and provides a new tool for economists studying administrative and proprietary big data.
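The exact form of the persistent change filter is not given in the abstract; the following PyTorch sketch only illustrates the general design of interpretable feature functions feeding a (sparsity-regularized) linear layer, with a stand-in feature on binary payment histories.

```python
# Generic sketch of the "interpretable features + sparse linear layer" design
# the abstract describes.  The feature below (an exponentially discounted sum
# of recent payment indicators, with a learnable decay) is an illustrative
# stand-in, not the paper's persistent change filter.
import torch
import torch.nn as nn

class RecentPaymentFeature(nn.Module):
    """Interpretable feature: discounted sum of recent payment indicators."""
    def __init__(self):
        super().__init__()
        self.decay = nn.Parameter(torch.tensor(0.5))    # learnable, inspectable

    def forward(self, x):                               # x: (batch, time), 0/1
        T = x.shape[1]
        weights = torch.sigmoid(self.decay) ** torch.arange(T - 1, -1, -1).float()
        return (x * weights).sum(dim=1, keepdim=True)   # recent periods weigh more

class InterpretableNet(nn.Module):
    def __init__(self, n_series):
        super().__init__()
        self.features = nn.ModuleList([RecentPaymentFeature() for _ in range(n_series)])
        self.linear = nn.Linear(n_series, 1)            # kept sparse via L1 penalty

    def forward(self, xs):                              # xs: list of (batch, time)
        feats = torch.cat([f(x) for f, x in zip(self.features, xs)], dim=1)
        return torch.sigmoid(self.linear(feats))        # employment probability

model = InterpretableNet(n_series=3)
xs = [torch.randint(0, 2, (4, 12)).float() for _ in range(3)]
print(model(xs).shape)                                  # torch.Size([4, 1])
# an L1 penalty, e.g. model.linear.weight.abs().sum(), keeps few active features
```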
Estimating linear regressions by least squares and reporting robust standard errors is very common in financial economics, and indeed in much of the social sciences and elsewhere. For thick-tailed predictors under heteroskedasticity, this recipe for inference performs poorly, sometimes dramatically so. Here, we develop an alternative approach that delivers an unbiased, consistent, and asymptotically normal estimator so long as the means of the outcome and predictors are finite. The new method has standard errors under heteroskedasticity that are easy to estimate reliably and tests that are close to their nominal size. The procedure works well in simulations and in an empirical exercise. An extension to quantile regression is given.
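For context, the conventional recipe the abstract critiques, least squares with heteroskedasticity-robust standard errors, can be reproduced as below on synthetic thick-tailed data; the paper's proposed alternative estimator is not reproduced here.

```python
# The conventional recipe the abstract critiques: OLS with
# heteroskedasticity-robust (HC3) standard errors.  Synthetic data with a
# thick-tailed predictor illustrate the setting; the paper's alternative
# estimator is not reproduced here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
x = rng.standard_t(df=2, size=n)             # thick-tailed predictor
e = rng.normal(size=n) * (1 + np.abs(x))     # heteroskedastic errors
y = 1.0 + 0.5 * x + e

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit(cov_type="HC3")       # robust standard errors
print(fit.summary().tables[1])
```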
Health economic evaluations face the issues of non-compliance and missing data. Here, non-compliance is defined as non-adherence to a specific treatment, and occurs within randomised controlled trials (RCTs) when participants depart from their random assignment. Missing data arises if, for example, there is loss to follow-up, survey non-response, or the information available from routine data sources is incomplete. Appropriate statistical methods for handling non-compliance and missing data have been developed, but they have rarely been applied in health economics studies. Here, we illustrate the issues and outline some of the appropriate methods to handle these with an application to a health economic evaluation that uses data from an RCT. In an RCT the random assignment can be used as an instrument for treatment receipt, to obtain consistent estimates of the complier average causal effect, provided the underlying assumptions are met. Instrumental variable methods can accommodate essential features of the health economic context such as the correlation between individuals' costs and outcomes in cost-effectiveness studies. Methodological guidance for handling missing data encourages approaches such as multiple imputation or inverse probability weighting, that assume the data are Missing At Random, but also sensitivity analyses that recognise the data may be missing according to the true, unobserved values, that is, Missing Not at Random. Future studies should subject the assumptions behind methods for handling non-compliance and missing data to thorough sensitivity analyses. Modern machine learning methods can help reduce reliance on correct model specification. Further research is required to develop flexible methods for handling more complex forms of non-compliance and missing data.
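A minimal sketch of the instrumental-variable idea mentioned above, assuming one-sided non-compliance purely for illustration: random assignment Z instruments treatment receipt D, and the Wald ratio estimates the complier average causal effect. The data are synthetic placeholders; a real analysis would also model costs, outcomes, and missingness jointly, as the abstract discusses.

```python
# Minimal sketch: random assignment Z as an instrument for treatment receipt D,
# with the Wald estimator of the complier average causal effect (CACE).
# Data are synthetic placeholders with one-sided non-compliance.
import numpy as np

rng = np.random.default_rng(0)
n = 10000
z = rng.integers(0, 2, size=n)                       # random assignment
complier = rng.random(n) < 0.7                       # 70% comply with assignment
d = np.where(complier, z, 0)                         # treatment actually received
y = 2.0 * d + rng.normal(size=n)                     # true effect on compliers = 2

itt_effect = y[z == 1].mean() - y[z == 0].mean()     # intention-to-treat effect
uptake_gap = d[z == 1].mean() - d[z == 0].mean()     # first stage
cace = itt_effect / uptake_gap                       # Wald / IV estimate
print(round(cace, 2))                                # close to 2.0
```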
In this paper, we propose networked microgrids to facilitate the integration of variable renewable generation and to improve the economics and resiliency of electricity supply in microgrids. A new concept, the probability of successful islanding (PSI), is used to quantify the islanding capability of a microgrid, considering the uncertainty of renewable energy resources and load as well as the power exchanged at the point of common coupling (PCC). With the goal of minimizing the total operating cost while preserving a user-specified PSI, a chance-constrained optimization problem is formulated for the optimal scheduling of both individual and networked microgrids. Numerical simulation results show that significant savings in electricity cost can be achieved by the proposed networked microgrids without compromising resiliency. The impact of the correlation coefficients among the renewable generation and load of adjacent microgrids is also studied.
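A minimal sketch of how a chance constraint of this kind is often made deterministic, assuming (purely for illustration) that the islanded net-load forecast error is Gaussian: requiring P(capacity >= net load) >= PSI becomes a margin of z_PSI standard deviations. The numbers are illustrative, not the paper's case study.

```python
# Sketch of the standard Gaussian reformulation of a chance constraint:
# P(available capacity >= islanded net load) >= PSI becomes a deterministic
# margin of z_PSI standard deviations.  All numbers are illustrative.
from scipy.stats import norm

psi = 0.95                              # user-specified probability of
z = norm.ppf(psi)                       # successful islanding (PSI)

net_load_mean = 4.0                     # MW, forecast net load when islanded
net_load_std = 0.6                      # MW, renewable + load + PCC uncertainty
dispatchable_capacity = 5.2             # MW, local generation plus storage

required = net_load_mean + z * net_load_std
print("required capacity for PSI=0.95:", round(required, 2), "MW")
print("constraint satisfied:", dispatchable_capacity >= required)
```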
Asteroid mining has been proposed as an approach to complement Earth-based supplies of rare earth metals and to supply resources in space, such as water. However, existing studies on the economic viability of asteroid mining have remained rather simplistic and provide little guidance on which technological improvements would be needed to increase its economic viability. This paper develops a techno-economic analysis of asteroid mining with the objective of providing recommendations for future technology development and performance improvements. Both in-space resource provision, such as water, and the return of platinum to Earth are considered. Starting from first principles of techno-economic analysis, additional economic and technological factors are gradually added to the analysis model. The model is applied to mining missions involving spacecraft reuse, learning-curve effects, and multiple spacecraft to assess their economic viability. A sensitivity analysis with respect to throughput rate, spacecraft mass, and resource price is performed. Furthermore, a sample asteroid volatile mining architecture based on small CubeSat-class spacecraft is presented. It is concluded that the key technological drivers for asteroid mining missions are the throughput rate, the number of spacecraft per mission, and the rate at which successive missions are conducted.
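A minimal sketch of the kind of calculation such a techno-economic analysis involves: spacecraft costs falling along a Wright learning curve and mission cash flows discounted to a net present value. All figures are placeholders, not the paper's estimates.

```python
# Sketch of a techno-economic building block: the cost of successive
# spacecraft falls along a learning curve, and mission cash flows are
# discounted to a net present value.  All figures are placeholders.
import math

first_unit_cost = 150e6       # $ cost of the first spacecraft
learning_rate = 0.85          # each doubling of units cuts cost to 85%
resource_revenue = 60e6       # $ revenue per successful mission
discount_rate = 0.10
n_missions = 8

b = math.log2(learning_rate)  # learning-curve exponent
npv = 0.0
for n in range(1, n_missions + 1):
    unit_cost = first_unit_cost * n ** b
    cash_flow = resource_revenue - unit_cost
    npv += cash_flow / (1 + discount_rate) ** n

print(f"NPV over {n_missions} missions: ${npv / 1e6:.1f}M")
```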
In this paper we introduce the concept of split Nash equilibrium problems associated with two related noncooperative strategic games. We then apply the Fan-KKM theorem to prove the existence of solutions to split Nash equilibrium problems of related noncooperative strategic games in which the strategy sets of the players are nonempty, closed, and convex subsets of Banach spaces. As an application of this existence result to economics, an example is provided that studies the existence of split Nash equilibria of utilities of two related economies. We also study the existence of split Nash equilibria in the dual extended Bertrand duopoly model of price competition.
In this paper, we turn our attention to geometric parameters and their applications in economics and finance. We discuss multiplicative models, in which a geometric mean and a geometric standard deviation are more natural than arithmetic ones. As illustrations, we give two examples: one from the Warsaw Stock Exchange in 1995--2009 and one from bids for 52-week treasury bills in Poland in 1992--2009. For distributions with applications in finance and insurance we give their multiplicative parameters as well as their estimators. We consider, among others, heavy-tailed distributions such as the lognormal and Pareto distributions, applied to the modelling of large losses.
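For reference, the two multiplicative parameters discussed above can be computed as follows; the sample values are illustrative.

```python
# Multiplicative (geometric) parameters for positive data such as gross
# returns or loss sizes: geometric mean and geometric standard deviation.
import numpy as np

x = np.array([1.05, 0.92, 1.30, 1.01, 0.85])     # e.g. yearly gross returns

log_x = np.log(x)
geo_mean = np.exp(log_x.mean())                  # (prod x_i)^(1/n)
geo_std = np.exp(log_x.std(ddof=1))              # multiplicative spread factor

print("geometric mean:", round(geo_mean, 4))
print("geometric std :", round(geo_std, 4))
# for lognormal data, log_x.mean() and log_x.std() estimate mu and sigma
```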
Dion Harmon, Marcus A. M. de Aguiar, David D. Chinellato, et al.
Predicting panic is of critical importance in many areas of human and animal behavior, notably in the context of economics. The recent financial crisis is a case in point. Panic may be due to a specific external threat, or self-generated nervousness. Here we show that the recent economic crisis and earlier large single-day panics were preceded by extended periods of high levels of market mimicry --- direct evidence of uncertainty and nervousness, and of the comparatively weak influence of external news. High levels of mimicry can be a quite general indicator of the potential for self-organized crises.
The dynamic network of relationships among corporations underlies cascading economic failures including the current economic crisis, and can be inferred from correlations in market value fluctuations. We analyze the time dependence of the network of correlations to reveal the changing relationships among the financial, technology, and basic materials sectors with rising and falling markets and resource constraints. The financial sector links otherwise weakly coupled economic sectors, particularly during economic declines. Such links increase economic risk and the extent of cascading failures. Our results suggest that firewalls between financial services for different sectors would reduce systemic risk without hampering economic growth.
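A minimal sketch of inferring such a network from market-value fluctuations: pairwise correlations of returns, thresholded into links. The synthetic factor structure and threshold are illustrative, not the authors' construction.

```python
# Sketch of inferring a relationship network from market-value fluctuations:
# pairwise correlations of daily returns, kept as links above a threshold.
# Data, factor structure, and threshold are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_firms = 250, 6
market = rng.normal(size=n_days)                       # common market factor
returns = 0.5 * market[:, None] + rng.normal(size=(n_days, n_firms))

corr = np.corrcoef(returns, rowvar=False)              # n_firms x n_firms
threshold = 0.4
adjacency = (corr > threshold) & ~np.eye(n_firms, dtype=bool)

print("correlation matrix:\n", np.round(corr, 2))
print("links above threshold:", int(adjacency.sum()) // 2)
```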
Although classical economic theory is based on the concept of stable equilibrium, real economic systems appear to be always out of equilibrium. Indeed, they share many of the dynamical features of other complex systems, e.g., ecological food webs. We focus on the relation between the increasing complexity of the economic network and its stability with respect to small perturbations in the dynamical variables associated with the constituent nodes. Inherent delays and multiple time scales suggest that economic systems will be more likely to exhibit instabilities as their complexity increases, even though the speed at which transactions are conducted has increased many-fold through technological developments. Analogous to the birth of nonlinear dynamics from Poincaré's work on the question of whether the solar system is stable, we suggest that similar theoretical developments may arise from efforts by econophysicists to understand the mechanisms by which instabilities arise in the economy.