A large database of published model results is used to estimate the distribution of the social cost of carbon as a function of the underlying assumptions. The literature on the social cost of carbon deviates in its assumptions from the literatures on the impacts of climate change, discounting, and risk aversion. The proposed meta-emulator corrects this. The social cost of carbon is higher than reported in the literature.
Recently, an indicator for stock market fragility and crash size based on the Ollivier-Ricci curvature has been proposed. We study analytical and empirical properties of this indicator, test its sensitivity to the parameters involved, and provide heuristics for choosing them. We show when and how the indicator accurately describes a financial crisis. We also propose an alternative method for computing the indicator using a specific subgraph with special curvature properties.
This paper outlines a Bayesian approach to estimating finite mixtures of Tobit models. The method is an MCMC approach that combines Gibbs sampling with data augmentation and is simple to implement. I show through simulations that the flexibility provided by this method is especially helpful when censoring is not negligible. In addition, I demonstrate the broad utility of this methodology with applications to a job training program, labor supply, and demand for medical care. I find that this approach allows for non-trivial additional flexibility that can alter results considerably, beyond merely improving model fit.
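As a minimal sketch of the data-augmentation idea behind such samplers (a single left-censored-at-zero Tobit with known unit variance and a flat prior on the mean; the mixture and regression components of the paper are omitted, and all numbers are illustrative):

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Simulate left-censored-at-zero data from one Tobit: y* ~ N(mu, 1), y = max(y*, 0)
mu_true = 0.5
ystar = rng.normal(mu_true, 1.0, size=500)
y = np.maximum(ystar, 0.0)
cens = y == 0.0

# Gibbs sampler with data augmentation: alternate between (i) imputing the
# latent y* for censored observations from a normal truncated to (-inf, 0]
# and (ii) drawing mu from its conjugate normal posterior (flat prior,
# known variance 1).
draws = []
mu = 0.0
for it in range(2000):
    z = y.copy()
    # truncnorm takes bounds in standardized units: (bound - loc) / scale
    b = (0.0 - mu) / 1.0
    z[cens] = truncnorm.rvs(-np.inf, b, loc=mu, scale=1.0,
                            size=cens.sum(), random_state=rng)
    mu = rng.normal(z.mean(), 1.0 / np.sqrt(len(z)))
    if it >= 500:  # discard burn-in
        draws.append(mu)

print(np.mean(draws))  # posterior mean, close to mu_true
```

Extending this to a mixture adds a latent component label per observation, drawn in a third Gibbs step; the censoring step is unchanged.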
This note provides a conceptual clarification of Ronald Aylmer Fisher's (1935) pioneering exact test in the context of the Lady Tasting Tea experiment. It unveils a critical implicit assumption in Fisher's calibration: the taster minimizes expected misclassification given fixed probabilistic information. Without similar assumptions or an explicit alternative hypothesis, the rationale behind Fisher's specification of the rejection region remains unclear.
How robust are analyses based on marginal treatment effects (MTE) to violations of Imbens and Angrist (1994) monotonicity? In this note, I present weaker forms of monotonicity under which popular MTE-based estimands still identify the parameters of interest.
This paper proposes three novel test procedures that yield valid inference in an environment with many weak instrumental variables (MWIV). It is observed that the t statistic of the jackknife instrumental variable estimator (JIVE) has an asymptotic distribution identical to that of the two-stage least squares (TSLS) t statistic in the just-identified environment. Consequently, test procedures that are valid for the TSLS t statistic are also valid for the JIVE t statistic. Two such procedures, namely VtF and conditional Wald, are adapted directly. By exploiting a feature of MWIV environments, a third, more powerful, one-sided VtF-based test procedure can be obtained.
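As a rough numerical illustration of the leave-one-out first stage behind JIVE (simulated data with many individually weak instruments; the standard error below is a simple heteroskedasticity-robust heuristic, not the paper's VtF or conditional Wald construction):

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 2000, 30   # many instruments

# Simulate a linear IV model: x = Z pi + v, y = beta x + e, with corr(e, v)
Z = rng.normal(size=(n, K))
pi = np.full(K, 0.1)               # each instrument individually weak
v = rng.normal(size=n)
e = 0.5 * v + rng.normal(size=n)   # endogeneity
x = Z @ pi + v
beta_true = 1.0
y = beta_true * x + e

# Leave-one-out (jackknife) first-stage fitted values:
# xtilde_i = (Z_i' pihat - h_ii x_i) / (1 - h_ii), with h_ii the i-th
# diagonal element of the projection matrix P = Z (Z'Z)^{-1} Z'.
G = np.linalg.solve(Z.T @ Z, Z.T)   # (Z'Z)^{-1} Z'
fitted = Z @ (G @ x)
h = np.einsum('ij,ji->i', Z, G)     # diag of the hat matrix
xtilde = (fitted - h * x) / (1.0 - h)

beta_jive = (xtilde @ y) / (xtilde @ x)

# Simple robust standard error for the JIVE coefficient (illustrative only)
resid = y - beta_jive * x
se = np.sqrt(np.sum((xtilde * resid) ** 2)) / np.abs(xtilde @ x)
t_jive = (beta_jive - beta_true) / se
print(beta_jive, t_jive)
```

Dropping the own observation from the first stage removes the mechanical correlation between the fitted instrument and the structural error that biases TSLS under many instruments.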
We revisit conduct parameter estimation in homogeneous goods markets to resolve the conflict between Bresnahan (1982) and Perloff and Shen (2012) regarding the identification and estimation of conduct parameters. We point out that Perloff and Shen's (2012) proof is incorrect and that their simulation setting is invalid. Our simulations show that estimation becomes accurate when demand shifters are properly included in the supply estimation and sample sizes are increased, supporting Bresnahan (1982).
The Clustered Factor (CF) model induces a block structure on the correlation matrix and is commonly used to parameterize correlation matrices. Our results reveal that the CF model imposes superfluous restrictions on the correlation matrix. This can be avoided by a different parametrization, involving the logarithmic transformation of the block correlation matrix.
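As a minimal sketch of the matrix-logarithm idea (illustrative block sizes and correlations; the paper's parametrization operates on the block correlation matrix and is more refined than taking the logarithm of the full matrix, as done here):

```python
import numpy as np
from scipy.linalg import expm, logm

# Build a correlation matrix with a two-block structure: within-block
# correlations 0.6 and 0.4, between-block correlation 0.2 (hypothetical values).
def block_corr(sizes, within, between):
    n = sum(sizes)
    C = np.full((n, n), between)
    start = 0
    for s, w in zip(sizes, within):
        C[start:start + s, start:start + s] = w
        start += s
    np.fill_diagonal(C, 1.0)
    return C

C = block_corr([3, 2], [0.6, 0.4], 0.2)

# The matrix logarithm maps the positive definite C to an unconstrained
# symmetric matrix; the matrix exponential maps back.
A = logm(C).real
C_back = expm(A).real
print(np.allclose(C, C_back))
```

Note that the exponential of an arbitrary symmetric matrix is positive definite but need not have a unit diagonal, which is one reason a parametrization tailored to correlation (rather than covariance) matrices is needed.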
We offer retrospective and prospective assessments of the Diebold-Yilmaz connectedness research program, combined with personal recollections of its development. Its centerpiece in many respects is Diebold and Yilmaz (2014), around which our discussion is organized.
This paper provides partial identification results for the marginal treatment effect ($MTE$) when the binary treatment variable is potentially misreported and the instrumental variable is discrete. Identification results are derived under different sets of nonparametric assumptions and are illustrated in an application to the marginal treatment effects of food stamps on health.
We consider settings where an allocation has to be chosen repeatedly, returns are unknown but can be learned, and decisions are subject to constraints. Our model covers two-sided and one-sided matching, even with complex constraints. We propose an approach based on Thompson sampling. Our main result is a prior-independent finite-sample bound on the expected regret for this algorithm. Although the number of allocations grows exponentially in the number of participants, the bound does not depend on this number. We illustrate the performance of our algorithm using data on refugee resettlement in the United States.
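The paper's setting involves matching with constraints and a prior-independent regret bound; as a stripped-down illustration of the Thompson sampling idea only (a Beta-Bernoulli bandit with three options and no constraints, all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Three allocations with unknown success probabilities
p_true = np.array([0.3, 0.5, 0.7])
K = len(p_true)
alpha = np.ones(K)   # Beta(1, 1) prior for each allocation
beta = np.ones(K)
pulls = np.zeros(K, dtype=int)

for t in range(5000):
    theta = rng.beta(alpha, beta)   # one posterior draw per allocation
    k = int(np.argmax(theta))       # act greedily on the sampled draws
    reward = rng.random() < p_true[k]
    alpha[k] += reward              # conjugate Beta-Bernoulli update
    beta[k] += 1 - reward
    pulls[k] += 1

print(pulls)  # most pulls concentrate on the best allocation
```

The randomness of the posterior draws supplies exploration automatically: allocations with uncertain returns occasionally produce the highest draw and get tried, while clearly inferior ones are sampled less and less.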
We establish nonparametric identification in a class of so-called index models using a novel approach that relies on general topological results. Our proof strategy requires substantially weaker conditions on the functions and distributions characterizing the model compared to existing strategies; in particular, it does not require any large support conditions on the regressors of our model. We apply the general identification result to additive random utility and competing risk models.
We introduce tools for controlled variable selection to economists. In particular, we apply a recently introduced aggregation scheme for false discovery rate (FDR) control to German administrative data to determine the parts of the individual employment histories that are relevant for the career outcomes of women. Our results suggest that career outcomes can be predicted based on a small set of variables, such as daily earnings, wage increases in combination with a high level of education, employment status, and working experience.
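The paper's aggregation scheme is not reproduced here; as a generic, minimal illustration of FDR control itself, a Benjamini-Hochberg sketch with hypothetical p-values:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.1):
    """Return a boolean mask of discoveries under BH FDR control at level q."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    # Compare the i-th smallest p-value against q * i / m
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True   # reject the k smallest p-values
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
print(benjamini_hochberg(pvals, q=0.1))  # four discoveries at q = 0.1
```

Controlling the FDR rather than the family-wise error rate is what makes variable selection over many candidate predictors feasible without sacrificing most of the power.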
Dyadic data, where outcomes reflecting pairwise interaction among sampled units are of primary interest, arise frequently in social science research. Regression analyses with such data feature prominently in many research literatures (e.g., gravity models of trade). The dependence structure associated with dyadic data raises special estimation and, especially, inference issues. This chapter reviews currently available methods for (parametric) dyadic regression analysis and presents guidelines for empirical researchers.
Many economic studies use shift-share instruments to estimate causal effects. Often, all shares need to fulfil an exclusion restriction, making the identifying assumption strict. This paper proposes to use methods that relax the exclusion restriction by selecting invalid shares. I apply the methods in two empirical examples: the effect of immigration on wages and of Chinese import exposure on employment. In the first application, the coefficient becomes lower and often changes sign, but this is reconcilable with arguments made in the literature. In the second application, the findings are mostly robust to the use of the new methods.
Matias D. Cattaneo, Rocio Titiunik, Gonzalo Vazquez-Bare
This handbook chapter gives an introduction to the sharp regression discontinuity design, covering identification, estimation, inference, and falsification methods.