This paper introduces a new method for testing the statistical significance of estimated parameters in predictive regressions. The approach features a new family of test statistics that are robust to the degree of persistence of the predictors. Importantly, the method accounts for serial correlation and conditional heteroskedasticity without requiring any corrections or adjustments. This is achieved through a mechanism embedded in the test statistics that effectively neutralizes the serial dependence present in the data. The limiting null distributions of these test statistics are shown to be chi-square, and their asymptotic power under local alternatives is derived. A comprehensive set of simulation experiments illustrates their finite-sample size and power properties.
The second volume of the collection Mundos do trabalho em Santa Catarina reflects the continued academic effort in southern Santa Catarina to understand and bring to light the state's labor relations from a critical perspective, serving as a channel of scientific, and also class-based, resistance. It is not a neutral book, not because it adopts a position a priori, but because the experiences shared here act inseparably on the consciousness of the academic authors and, likewise, on the consciousness of the workers who are the subjects of their respective studies. It is therefore also a political book, one that takes a stand against the neoliberal policies that render labor relations precarious and marginalize the working class.
Eric Auerbach, Jonathan Auerbach, Max Tabord-Meehan
We thank Savje (2023) for a thought-provoking article and appreciate the opportunity to share our perspective as social scientists. In his article, Savje recommends misspecified exposure effects as a way to avoid strong assumptions about interference when analyzing the results of an experiment. In this invited discussion, we highlight a limitation of Savje's recommendation: exposure effects are not generally useful for evaluating social policies without the strong assumptions that Savje seeks to avoid.
An updated and extended meta-analysis confirms that the central estimate of the social cost of carbon is around $200/tC, with large, right-skewed uncertainty, and is trending upwards over time. The pure rate of time preference and the inverse of the elasticity of intertemporal substitution are key assumptions; the total impact of 2.5K warming matters less. The social cost of carbon is much higher if climate change is assumed to affect economic growth rather than the level of output and welfare. The literature is dominated by a relatively small network of authors, based in a few countries. Publication and citation bias have pushed the social cost of carbon up.
In this study, we introduced various statistical performance metrics, based on the pinball loss and the empirical coverage, for ranking probabilistic forecasting models. We tested the ability of the proposed metrics to identify the top-performing forecasting model and investigated which metric corresponds to the highest average per-trade profit in the out-of-sample period. Our findings show that, for the considered trading strategy, ranking the forecasting models by the coverage of the quantile forecasts used in the trading hours yields superior economic performance.
The control function approach allows the researcher to identify various causal effects of interest. While powerful, it requires a strong invertibility assumption in the selection process, which limits its applicability. This paper expands the scope of the nonparametric control function approach by allowing the control function to be set-valued, and derives sharp bounds on structural parameters. The proposed generalization accommodates a wide range of selection processes involving discrete endogenous variables, random coefficients, treatment selections with interference, and dynamic treatment selections. The framework also applies to partially observed or identified controls that are directly motivated from economic models.
This paper considers hypothesis testing in semiparametric models which may be non-regular. I show that $C(\alpha)$ style tests are locally regular under mild conditions, including in cases where locally regular estimators do not exist, such as models which are (semiparametrically) weakly identified. I characterise the appropriate limit experiment in which to study local (asymptotic) optimality of tests in the non-regular case and generalise classical power bounds to this case. I give conditions under which these power bounds are attained by the proposed $C(\alpha)$ style tests. The application of the theory to a single index model and an instrumental variables model is worked out in detail.
This Appendix (dated July 2021) includes supplementary derivations related to the main limit results of the econometric framework for structural break testing in predictive regression models based on the OLS-Wald and IVX-Wald test statistics, developed by Katsouris C (2021). In particular, we derive the asymptotic distributions of the test statistics when the predictive regression model includes either mildly integrated or persistent regressors. Moreover, we consider the case in which an intercept is included in the model vis-à-vis the case in which it is not. In a subsequent version of this study we reexamine these aspects in more depth with respect to the demeaned versions of the variables in the predictive regression.
This paper recasts identification at infinity for the intercept of the sample selection model as identification at the boundary, via a transformation of the selection index. This perspective suggests generalizations of estimation at infinity to kernel regression estimation at the boundary, and further to local linear estimation at the boundary. The proposed kernel-type estimators with an estimated transformation are proven to be nonparametric-rate consistent and asymptotically normal under mild regularity conditions. A fully data-driven method of selecting the optimal bandwidths for the estimators is developed. Monte Carlo simulations show the desirable finite-sample properties of the proposed estimators and bandwidth selection procedures.
This paper develops a novel nonparametric identification method for treatment effects in settings where individuals self-select into treatment sequences. I propose an identification strategy that relies on a dynamic version of standard Instrumental Variables (IV) assumptions and builds on a dynamic version of the Marginal Treatment Effect (MTE) as the fundamental building block for treatment effects. The main contribution of the paper is to relax the assumptions on the support of the observed variables and on the unobservable gains from treatment that are present in the dynamic treatment effects literature. A close-to-application Monte Carlo study illustrates the desirable finite-sample performance of a sieve estimator for MTEs and Average Treatment Effects (ATEs).
In this survey we discuss the recent causal panel data literature, emphasizing practical advice for empirical researchers. This literature has focused on credibly estimating causal effects of binary interventions in settings with longitudinal data. It pays particular attention to heterogeneity in the causal effects, often in situations where few units are treated and the assignment pattern has particular structure. The literature has extended earlier work on difference-in-differences and two-way-fixed-effects estimators, has more generally incorporated factor models and interactive fixed effects, and has developed novel methods using synthetic control approaches.
This paper introduces a flexible local projection that generalizes the model of Jordá (2005) to a non-parametric setting using Bayesian Additive Regression Trees. Monte Carlo experiments show that our BART-LP model is able to capture non-linearities in the impulse responses. Our first application shows that the fiscal multiplier is stronger in recession than in expansion only in response to contractionary fiscal shocks, not in response to expansionary fiscal shocks. We then show that financial shocks generate effects on the economy that increase more than proportionately with the size of the shock when the shock is negative, but not when the shock is positive.
We consider estimation in moment condition models and show that under any bound on identification strength, asymptotically admissible (i.e. undominated) estimators in a wide class of estimation problems must be uniformly continuous in the sample moment function. GMM estimators are in general discontinuous in the sample moments, and are thus inadmissible. We show, by contrast, that bagged, or bootstrap aggregated, GMM estimators as well as quasi-Bayes posterior means have superior continuity properties, while results in the literature imply that they are equivalent to GMM when identification is strong. In simulations calibrated to published instrumental variables specifications, we find that these alternatives often outperform GMM.
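The smoothing idea behind bagging can be illustrated with a minimal numpy sketch. This is not the paper's estimator: the just-identified linear IV design, the data-generating process, and the bootstrap count are all illustrative assumptions. The point is only that averaging a just-identified GMM/IV estimate over bootstrap resamples produces an estimator that is a smoothed, rather than discontinuous, function of the sample moments.

```python
import numpy as np

rng = np.random.default_rng(0)

def iv_gmm(y, x, z):
    # Just-identified linear IV/GMM estimate: beta_hat = (z'x)^{-1} z'y.
    return float(z @ y / (z @ x))

def bagged_gmm(y, x, z, n_boot=200, rng=rng):
    # Bootstrap-aggregated GMM: average the estimator over resamples,
    # which smooths its dependence on the sample moments.
    n = len(y)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        draws.append(iv_gmm(y[idx], x[idx], z[idx]))
    return float(np.mean(draws))

# Simulated design with true coefficient beta = 1 and an endogenous regressor.
n = 2000
z = rng.normal(size=n)                       # instrument
u = rng.normal(size=n)                       # structural error
x = 0.8 * z + 0.5 * u + rng.normal(size=n)   # x correlated with u
y = 1.0 * x + u

b_hat = iv_gmm(y, x, z)
b_bag = bagged_gmm(y, x, z)
```

Under this strong-instrument design the two estimates are close; the continuity advantage of the bagged version matters when identification is weak.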
Fixed effect estimators of nonlinear panel data models suffer from the incidental parameter problem. This leads to two undesirable consequences in applied research: (1) point estimates are subject to large biases, and (2) confidence intervals have incorrect coverage. This paper proposes a simulation-based method for bias reduction. The method simulates data using the model with estimated individual effects, and finds values of the parameters by equating fixed effect estimates obtained from observed and simulated data. The asymptotic framework provides consistency, bias correction, and asymptotic normality results. An application to female labor force participation, together with simulations, illustrates the finite-sample performance of the method.
I establish primitive conditions for unconfoundedness in a coherent model that features heterogeneous treatment effects, spillovers, selection-on-observables, and network formation. I identify average partial effects under minimal exchangeability conditions. If social interactions are also anonymous, I derive a three-dimensional network propensity score, characterize its support conditions, relate it to recent work on network pseudo-metrics, and study extensions. I propose a two-step semiparametric estimator for a random coefficients model which is consistent and asymptotically normal as the number and size of the networks grow. I apply my estimator to a political participation intervention in Uganda and a microfinance application in India.
In this work we introduce a unit averaging procedure to efficiently recover unit-specific parameters in a heterogeneous panel model. The procedure consists of estimating the parameter of a given unit using a weighted average of all the unit-specific parameter estimators in the panel. The weights of the average are determined by minimizing an MSE criterion we derive. We analyze the properties of the resulting minimum MSE unit averaging estimator in a local heterogeneity framework inspired by the literature on frequentist model averaging, and we derive the local asymptotic distribution of the estimator and the corresponding weights. The benefits of the procedure are showcased with an application to forecasting unemployment rates for a panel of German regions.
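The bias-variance trade-off behind unit averaging can be illustrated with a stylized numpy sketch. This is not the paper's full minimum-MSE weight vector: for simplicity the weights are restricted to a convex combination of a unit's own estimate and the cross-sectional mean, with a plug-in MSE minimized in closed form, and the data-generating design is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def unit_average(theta_hat, var_hat, i):
    # Minimum plug-in-MSE combination of unit i's own estimate and the
    # cross-sectional mean (a two-point simplification of averaging over
    # all units). Minimizing w^2 * var_i + (1 - w)^2 * bias^2 gives the
    # closed-form weight below.
    theta_bar = theta_hat.mean()
    b2 = (theta_bar - theta_hat[i]) ** 2  # squared plug-in bias of the mean
    w = b2 / (var_hat[i] + b2)            # weight on the unit's own estimate
    return w * theta_hat[i] + (1 - w) * theta_bar

# Heterogeneous panel: unit-specific means estimated from T observations each.
N, T = 50, 20
theta = rng.normal(0.0, 0.3, size=N)              # true unit parameters
data = theta[:, None] + rng.normal(size=(N, T))
theta_hat = data.mean(axis=1)                      # per-unit estimates
var_hat = data.var(axis=1, ddof=1) / T             # their estimated variances

avg_est = np.array([unit_average(theta_hat, var_hat, i) for i in range(N)])
```

Each averaged estimate shrinks a noisy unit-level estimate toward the panel mean by an amount that grows with the unit's estimation variance.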
A key assumption of difference-in-differences designs is that the average evolution of untreated potential outcomes is the same across treatment cohorts: the parallel trends assumption. In this paper, we relax the parallel trends assumption by positing a latent type variable and developing a type-specific parallel trends assumption. With a finite-support assumption on the latent type variable and long pretreatment periods, we show that an extremum classifier consistently estimates the type assignment. Based on the classification result, we propose a type-specific difference-in-differences estimator of the type-specific ATT. By estimating type-specific ATTs, we study heterogeneity in the treatment effect in addition to heterogeneity in baseline outcomes.
We propose a one-to-many matching estimator of the average treatment effect based on propensity scores estimated by isotonic regression. This approach is predicated on the assumption of monotonicity in the propensity score function, a condition that can be justified in many economic applications. We show that the structure of the isotonic estimator resolves several issues with existing matching methods, including efficiency, the choice of the number of matches, the choice of tuning parameters, robustness to propensity score misspecification, and bootstrap validity. As a by-product, a uniformly consistent isotonic estimator is developed for our proposed matching method.
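The core isotonic-regression step can be sketched with the classical pool-adjacent-violators (PAV) algorithm in numpy. This is only an illustration of the propensity-score stage under an assumed scalar covariate and non-decreasing propensity function; the matching step itself is omitted, and all names and the simulated design are illustrative.

```python
import numpy as np

def pav(y):
    # Pool-adjacent-violators: non-decreasing least-squares fit to y.
    blocks = []  # list of [block value, block weight]
    for v in map(float, y):
        blocks.append([v, 1.0])
        # Merge adjacent blocks while monotonicity is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2 = blocks.pop()
            v1, w1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])
    out = []
    for v, w in blocks:
        out.extend([v] * int(w))
    return np.array(out)

rng = np.random.default_rng(2)
n = 500
x = np.sort(rng.uniform(-2.0, 2.0, n))          # covariate, sorted
p = 1.0 / (1.0 + np.exp(-x))                    # true monotone propensity
d = (rng.uniform(size=n) < p).astype(float)     # binary treatment indicator

p_hat = pav(d)  # isotonic propensity score estimate along x
```

Because the fit averages 0/1 outcomes within blocks, the estimate is automatically a valid probability and requires no bandwidth or other tuning parameter, which is one of the attractions the abstract alludes to.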
This paper examines the local linear regression (LLR) estimate of the conditional distribution function $F(y|x)$. We derive three uniform convergence results: the uniform bias expansion, the uniform convergence rate, and the uniform asymptotic linear representation. The uniformity in the above results is with respect to both $x$ and $y$ and therefore has not previously been addressed in the literature on local polynomial regression. Such uniform convergence results are especially useful when the conditional distribution estimator is the first stage of a semiparametric estimator. We demonstrate the usefulness of these uniform results with two examples: the stochastic equicontinuity condition in $y$, and the estimation of the integrated conditional distribution function.
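The estimator under study can be sketched as a local linear regression of the indicator $1\{Y \le y\}$ on $X$, with the fitted intercept at $x$ serving as the estimate of $F(y|x)$. The numpy illustration below is a minimal version under assumed choices (Epanechnikov kernel, a fixed bandwidth, and a simulated Gaussian design), not the paper's implementation.

```python
import numpy as np

def llr_cdf(x0, y0, X, Y, h):
    # Local linear estimate of F(y0 | x0): regress 1{Y <= y0} on (X - x0)
    # with kernel weights and return the fitted intercept.
    ind = (Y <= y0).astype(float)
    u = (X - x0) / h
    k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)  # Epanechnikov
    Z = np.column_stack([np.ones_like(X), X - x0])
    beta = np.linalg.solve(Z.T @ (k[:, None] * Z), Z.T @ (k * ind))
    return float(np.clip(beta[0], 0.0, 1.0))  # clip to a valid probability

rng = np.random.default_rng(3)
n = 2000
X = rng.uniform(-1.0, 1.0, n)
Y = X + rng.normal(size=n)  # true conditional CDF: F(y|x) = Phi(y - x)

est = llr_cdf(0.0, 0.0, X, Y, h=0.3)  # true value at (0, 0) is 0.5
```

Evaluating this estimator on a grid of $(x, y)$ pairs is exactly the setting in which the paper's joint uniformity in $x$ and $y$ becomes relevant.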
We present simple low-level conditions for identification in regression discontinuity designs, using a potential outcomes framework for manipulation of the running variable. Within this framework, we replace the existing identification statement with two restrictions on manipulation. The framework highlights the critical role played by continuity of the running variable's density in identification. In particular, we establish low-level auxiliary assumptions under which the diagnostic density test can detect manipulation that threatens identification, and hence under which the design is manipulation-robust.