Béal et al. (Int J Game Theory 54, 2025) introduce the Diversity Owen value for TU-games with diversity constraints and provide axiomatic characterizations using the axioms of fairness and balanced contributions. However, the uniqueness parts of the proofs of these characterizations contain logical flaws. In this note we provide corrected proofs by introducing the null player for diverse games axiom. We also establish an alternative characterization of the Diversity Owen value by modifying the axioms of the above characterizations.
The main purpose of this paper is to generalize some recent results obtained by Chilarescu and Manuel Gomez. Essentially, we study the effect of the elasticity of substitution on the parameters of economic growth, distinguishing its two possible regimes: below and above one. We show that a higher elasticity of substitution increases per capita income, the relative share of physical capital, the common growth rate, and the share of human capital allocated to the production sector, and that this result holds in both regimes.
This paper studies a game in which an informed sender with state-independent preferences uses verifiable messages to convince a receiver to choose an action from a finite set. We characterize the equilibrium outcomes of the game and compare them with commitment outcomes in information design. We provide conditions under which a commitment outcome is an equilibrium outcome and identify environments in which the sender does not benefit from commitment power. Our findings offer insights into the interchangeability of verifiability and commitment in applied settings.
Lotteries are commonly employed in school choice to fairly resolve priority ties; however, current practices typically keep students uninformed about their lottery outcomes at the time of preference submission. This paper advocates for revealing lottery information to students beforehand. When preference lists are constrained in length, which is a common feature in real-world systems, such disclosure reduces uncertainty and enables students to make more informed decisions. We demonstrate the benefits of lottery revelation through two stylized models. Theoretical predictions are supported by laboratory experiments.
The standard criterion of rationality in economics is the maximization of a utility function that is stable across multiple observations of an agent's choice behavior. In this paper, we discuss two notions of the money pump that characterize two corresponding notions of utility-maximization. We explain the senses in which the amount of money that can be pumped from a consumer is a useful measure of the consumer's departure from utility-maximization.
Addiction is a major societal issue leading to billions in healthcare losses per year. Policy makers often introduce ad hoc quantity limits (limits on the consumption or possession of a substance), something which current economic models of addiction have failed to address. This paper enriches the model of addiction driven by cue-triggered decisions of Bernheim and Rangel (2004) by incorporating an endogenous choice of how much of the addictive good to consume, rather than just whether consumption occurs. Stricter quantity limits improve welfare as long as they do not preclude the myopically optimal level of consumption.
We study law enforcement guided by data-informed predictions of "hot spots" for likely criminal offenses. Such "predictive" enforcement could lead to data being selectively and disproportionately collected from the neighborhoods targeted for enforcement. Predictive enforcement that fails to account for this endogenous "datafication" may lead to the over-policing of traditionally high-crime neighborhoods and can perform poorly, in some cases as poorly as if no data were used at all. Endogenizing the incentives for criminal offenses identifies additional deterrence benefits from the informationally efficient use of data.
We study the tiered deferred acceptance mechanism used in school admissions, such as those in China and Turkey. This mechanism partitions schools into tiers and applies the deferred acceptance algorithm within each tier; once assigned, students cannot apply to schools in subsequent tiers. We show that this mechanism is not strategy-proof. In the induced preference revelation game, we find that merging tiers preserves all equilibrium outcomes, and that within-tier acyclicity is necessary and sufficient for the mechanism to implement stable matchings. We also find that introducing tiers to the deferred acceptance mechanism may not improve student quality at top-tier schools as intended.
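The tier-by-tier procedure described above can be sketched in a few lines. The sketch below is a simplified illustration, not the paper's formal model: it runs student-proposing deferred acceptance within each tier and removes assigned students before moving to the next tier. All student, school, priority, and capacity names are hypothetical.

```python
def deferred_acceptance(students, prefs, priority, capacity):
    """One round of student-proposing deferred acceptance, restricted to
    the schools listed in `capacity`."""
    held = {sch: [] for sch in capacity}   # school -> tentatively held students
    nxt = {st: 0 for st in students}       # next school each student proposes to
    free = list(students)
    while free:
        st = free.pop()
        while nxt[st] < len(prefs[st]):
            sch = prefs[st][nxt[st]]
            nxt[st] += 1
            if sch not in capacity:        # school belongs to another tier
                continue
            held[sch].append(st)
            # keep only the `capacity[sch]` highest-priority students
            held[sch].sort(key=lambda x: priority[sch].index(x))
            if len(held[sch]) > capacity[sch]:
                rejected = held[sch].pop()
                if rejected == st:         # proposer rejected: try next school
                    continue
                free.append(rejected)      # someone else displaced
            break
    return {st: sch for sch, lst in held.items() for st in lst}

def tiered_da(students, prefs, priority, capacity, tiers):
    """Apply deferred acceptance tier by tier; assigned students leave."""
    matching, remaining = {}, list(students)
    for tier in tiers:                     # e.g. [["A"], ["B"]]
        cap = {sch: capacity[sch] for sch in tier}
        round_match = deferred_acceptance(remaining, prefs, priority, cap)
        matching.update(round_match)
        remaining = [st for st in remaining if st not in round_match]
    return matching
```

In a toy instance where two students both rank the single tier-1 school first, the one with lower priority is rejected in round one and can only compete for tier-2 seats afterwards, which illustrates why truthful reporting need not be optimal under tiers.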
This paper presents weakened notions of corewise stability and setwise stability for matching markets where agents have substitutable choice functions. We introduce the concepts of worker-quasi-core, firm-quasi-core, and worker-quasi-setwise stability. We also examine their relationship to established notions in the literature, such as worker-quasi and firm-quasi stability, in both many-to-one and many-to-many markets.
For two actions in a decision problem, a and b, each of which produces a state-dependent monetary reward, we study how to robustly make action a more attractive. Action a' improves upon a in this manner if the set of beliefs at which a is preferred to b is a subset of the set of beliefs at which a' is preferred to b, irrespective of the risk-averse agent's utility function (in money). We provide a full characterization of this relation and discuss applications in politics, bilateral trade, insurance, and information acquisition.
We model stochastic choices with categorization. The agent preliminarily groups alternatives into homogeneous disjoint classes, then randomly chooses one class and randomly picks an item within the selected class. We give a formal definition of a choice generated by this procedure and provide an axiomatic characterization. The characterizing properties allow an external analyst to detect that categorization is applied. Under a broader interpretation, the model describes the observed choice as the composition of independent subchoices. This composition preserves rationalizability by Random Utility Maximization. A generalization of the model subsumes the Luce model and the Nested Logit model.
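The induced choice distribution of the two-stage procedure above is simply the product of the class probability and the conditional item probability within the class. A minimal sketch, with hypothetical category names and weights (not the paper's formal notation):

```python
def choice_probabilities(categories, cat_prob, item_weight):
    """Induced choice distribution of the two-stage procedure:
    p(item) = p(class of item) * p(item | class), where the conditional
    probability is the item's weight normalized within its class."""
    probs = {}
    for cls, items in categories.items():
        total = sum(item_weight[i] for i in items)
        for i in items:
            probs[i] = cat_prob[cls] * item_weight[i] / total
    return probs

# Hypothetical example: two disjoint classes, chosen with equal probability.
example = choice_probabilities(
    categories={"cheap": ["a", "b"], "fancy": ["c"]},
    cat_prob={"cheap": 0.5, "fancy": 0.5},
    item_weight={"a": 2.0, "b": 1.0, "c": 1.0},
)
```

Note that the singleton class "fancy" gives item c the whole class probability, while a and b split the "cheap" class in proportion to their weights, which is the sense in which the observed choice is a composition of independent subchoices.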
We consider the problem of extending an acyclic binary relation that is invariant under a given family of transformations into an invariant preference. We show that when the family of transformations is commutative, every acyclic invariant binary relation admits such an extension. We find that, in general, the extensions agree on the ranking of many pairs that (i) are unranked by the original relation, and (ii) cannot be ranked by invariance or transitivity considerations alone. We interpret these additional implications as the out-of-sample predictions generated by invariance, and study their structure.
We propose a finite automaton-style solution concept for supergames. In our model, we define an equilibrium to be a cycle of state switches and a supergame to be an infinite walk on states of a finite stage game. We show that if the stage game is locally non-cooperative, and the utility function is monotonically decreasing in the number of defecting agents, the symmetric multiagent prisoners' dilemma supergame must contain a symmetric equilibrium and can contain asymmetric equilibria.
We introduce the "local-global" approach for a divisible portfolio and perform an equilibrium analysis for two variants of core-selecting auctions. Our main novelty is extending the Nearest-VCG pricing rule to a dynamic two-round setup, mitigating bidders' free-riding incentives and further reducing the sellers' costs. The two-round setup admits an information-revelation mechanism that may offset the "winner's curse", and it accords with the existing iterative procedures of combinatorial auctions. With portfolio trading becoming an increasingly important part of investment strategies, our mechanism contributes to the growing interest in portfolio auction protocols.
We study the implementation of fixed priority top trading cycles (FPTTC) rules via simply dominant mechanisms (Pycia and Troyan, 2019) in the context of assignment problems, where agents are to be assigned at most one indivisible object and monetary transfers are not allowed. We consider both models, with and without outside options, and characterize all simply dominant FPTTC rules in each. We further introduce the notion of simple strategy-proofness to address agents' concerns about having time-inconsistent preferences, and discuss its relation to simple dominance.
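For concreteness, a fixed-priority top trading cycles rule can be sketched as follows. This is a generic TTC implementation, not the paper's formal definition: it assumes strict preferences over all objects and no outside options, and all agent, object, and priority names are hypothetical. Each object points to its highest-priority remaining agent, each agent points to her favorite remaining object, and trades are executed along cycles.

```python
def fixed_priority_ttc(agents, objects, prefs, priority):
    """Fixed-priority top trading cycles: repeatedly find a cycle of
    agent -> favorite object -> highest-priority agent -> ... pointers,
    assign along the cycle, and remove the matched pairs."""
    assignment = {}
    remaining_agents = set(agents)
    remaining_objects = set(objects)
    while remaining_agents and remaining_objects:
        # pointers for the current round
        agent_pts = {a: next(o for o in prefs[a] if o in remaining_objects)
                     for a in remaining_agents}
        obj_pts = {o: next(a for a in priority[o] if a in remaining_agents)
                   for o in remaining_objects}
        # walk agent -> object -> agent ... until an agent repeats
        a = next(iter(remaining_agents))
        seen = []
        while a not in seen:
            seen.append(a)
            a = obj_pts[agent_pts[a]]
        cycle = seen[seen.index(a):]       # the cycle starts at the repeat
        for ag in cycle:
            assignment[ag] = agent_pts[ag]
            remaining_agents.remove(ag)
            remaining_objects.remove(agent_pts[ag])
    return assignment
```

Because the cycles present in a round are disjoint and unaffected by removing other cycles, the resulting assignment does not depend on the order in which cycles are processed.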
In discrete matching markets, substitutes and complements can be unidirectional between two groups of workers when firms regard the members of one group as more important or more competent than those of the other. We show that a stable matching exists and can be found by a two-stage Deferred Acceptance mechanism when firms' preferences satisfy a unidirectional substitutes and complements condition. This result applies to both firm-worker matching and controlled school choice. In the framework of matching with continuous monetary transfers and quasi-linear utilities, we show that substitutes and complements are bidirectional for a pair of workers.
Data-based decisionmaking must account for the manipulation of data by agents who are aware of how decisions are being made and want to affect their allocations. We study a framework in which, due to such manipulation, data becomes less informative when decisions depend more strongly on data. We formalize why and how a decisionmaker should commit to underutilizing data. Doing so attenuates information loss and thereby improves allocation accuracy.
Christopher P. Chambers, Federico Echenique, Nicolas Lambert
We study preferences estimated from finite choice experiments and provide sufficient conditions for convergence to a unique underlying "true" preference. Our conditions are weak, and therefore valid in a wide range of economic environments. We develop applications to expected utility theory, choice over consumption bundles, menu choice and intertemporal consumption. Our framework unifies the revealed preference tradition with models that allow for errors.
Under some initial conditions, it is shown that time-consistency requirements prevent the existence of a rational expectations equilibrium (REE) in dynamic stochastic general equilibrium models with consumer heterogeneity, in contrast to static models. However, REE-prohibiting initial conditions can be viewed as limits of other initial conditions, and the existence issue is then overcome by taking a limit of economies. This shows that significant care must be taken when dealing with rational expectations equilibria.