ForeComp: An R Package for Comparing Predictive Accuracy Using Fixed-Smoothing Asymptotics
Minchul Shin, Nathan Schor
We introduce ForeComp, an R package for comparing predictive accuracy using Diebold-Mariano-type tests of equal predictive ability under both standard and fixed-smoothing inference. The package provides a common interface for loss-differential-based testing and includes Plot Tradeoff, a visual diagnostic for bandwidth sensitivity and the size-power tradeoff. We illustrate the toolkit with applications to the Survey of Professional Forecasters and with Monte Carlo evidence on finite-sample performance.
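The Diebold-Mariano machinery that such a package wraps can be illustrated with a short sketch. The function below is a generic textbook implementation with a Bartlett-kernel long-run variance, not ForeComp's actual API; the `bandwidth` argument is precisely the tuning choice whose size-power tradeoff is at issue.

```python
import numpy as np

def dm_statistic(e1, e2, bandwidth=4):
    """Diebold-Mariano statistic for equal squared-error loss.

    e1, e2   : forecast-error series from two competing forecasts.
    bandwidth: truncation lag for the Bartlett (Newey-West) long-run
               variance of the loss differential.
    """
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2  # loss differential
    n = d.size
    dbar = d.mean()
    dc = d - dbar
    lrv = dc @ dc / n                              # lag-0 term
    for k in range(1, bandwidth + 1):
        w = 1.0 - k / (bandwidth + 1.0)            # Bartlett weight
        lrv += 2.0 * w * (dc[k:] @ dc[:-k]) / n
    return dbar / np.sqrt(lrv / n)

rng = np.random.default_rng(0)
e1 = rng.normal(size=500)          # equally accurate forecasts ...
e2 = rng.normal(size=500)
stat_equal = dm_statistic(e1, e2)  # ... give a moderate statistic
```

Under the null of equal accuracy the statistic is approximately standard normal; fixed-smoothing inference instead keeps the bandwidth "large" relative to the sample and uses nonstandard critical values.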
Fast and simple inner-loop algorithms of static / dynamic BLP estimations
Takeshi Fukasawa
This study investigates computationally efficient inner-loop algorithms for estimating static and dynamic BLP models. It proposes the following ideas for reducing the number of inner-loop iterations: (1) add a term involving the outside option share to the BLP contraction mapping; (2) analytically represent the mean product utilities as a function of value functions and solve for the value functions (for dynamic BLP); (3) combine these with an acceleration method for fixed-point iterations, especially Anderson acceleration. The ideas are independent of one another and easy to implement. This study demonstrates their good performance in numerical experiments.
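To fix ideas, the standard BLP inner loop that these proposals accelerate or modify can be sketched as follows. This is a minimal one-characteristic random-coefficient logit showing only the textbook contraction mapping, not the paper's modified updates; all names and the simulation setup are illustrative.

```python
import numpy as np

def predicted_shares(delta, x, sigma, nodes, weights):
    """Market shares of a one-dimensional random-coefficient logit."""
    u = delta[None, :] + sigma * nodes[:, None] * x[None, :]   # (draws, J)
    expu = np.exp(u)
    s = expu / (1.0 + expu.sum(axis=1, keepdims=True))         # outside good utility = 0
    return weights @ s

def blp_contraction(s_obs, x, sigma, nodes, weights, tol=1e-12, max_iter=1000):
    """Textbook BLP inner loop: delta <- delta + log s_obs - log s(delta)."""
    delta = np.log(s_obs) - np.log(1.0 - s_obs.sum())          # plain-logit start
    for it in range(max_iter):
        step = np.log(s_obs) - np.log(predicted_shares(delta, x, sigma, nodes, weights))
        delta = delta + step
        if np.max(np.abs(step)) < tol:
            break
    return delta, it + 1

rng = np.random.default_rng(1)
nodes = rng.normal(size=50)                 # simulated taste draws
weights = np.full(50, 1.0 / 50)
x = np.array([1.0, 0.5, -0.3])              # one product characteristic
true_delta = np.array([-1.0, -0.5, 0.2])
s_obs = predicted_shares(true_delta, x, 0.5, nodes, weights)
delta_hat, n_iter = blp_contraction(s_obs, x, 0.5, nodes, weights)
```

The contraction's convergence rate deteriorates as the outside share shrinks, which is why reducing the iteration count of this loop (or accelerating it, e.g. with Anderson acceleration) matters in practice.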
Conduct Parameter Estimation in Homogeneous Goods Markets with Equilibrium Existence and Uniqueness Conditions: The Case of Log-linear Specification
Yuri Matsumura, Suguru Otani
We propose a constrained generalized method of moments (GMM) estimator that imposes equilibrium uniqueness conditions for estimating the conduct parameter in a log-linear model of homogeneous goods markets. Monte Carlo simulations demonstrate that imposing parameter restrictions alone leads not only to inaccurate estimates but also to numerical issues, and that adding the equilibrium uniqueness conditions resolves them. We also suggest a formulation of the GMM estimation that further avoids the numerical issues.
Count Data Models with Heterogeneous Peer Effects under Rational Expectations
Aristide Houndetoungan
This paper develops a peer effect model for count responses under rational expectations. The model accounts for heterogeneity in peer effects across groups based on observed characteristics. Identification is based on the linear model condition that requires the presence of friends of friends who are not direct friends. I show that this identification condition extends to a broad class of nonlinear models. Parameters are estimated using a nested pseudo-likelihood approach. An empirical application to students' extracurricular participation reveals that females are more responsive to peers than males. An easy-to-use R package, CDatanet, is available for implementing the model.
On the Existence of One-Sided Representations for the Generalised Dynamic Factor Model
Philipp Gersing
We show that the common component of the Generalised Dynamic Factor Model (GDFM) can be represented using only current and past observations essentially whenever it is purely non-deterministic.
Ranking and Selection from Pairwise Comparisons: Empirical Bayes Methods for Citation Analysis
Jiaying Gu, Roger Koenker
We study the Stigler model of citation flows among journals, adapting the Bradley-Terry pairwise comparison model to perform ranking and selection of journal influence based on nonparametric empirical Bayes procedures. Comparisons with several other rankings are made.
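The Bradley-Terry building block can be sketched with the classic minorization-maximization (MM) update for the strength parameters. This is a plain maximum-likelihood illustration, not the nonparametric empirical Bayes procedure the paper develops, and reading `wins[i, j]` as citations of journal i by journal j is an assumption made for the example.

```python
import numpy as np

def bradley_terry(wins, n_iter=500):
    """MM algorithm (Hunter, 2004) for Bradley-Terry strength parameters.

    wins[i, j] = number of times item i beats item j; in a citation
    setting one might read this as citations of journal i by journal j.
    Returns strengths normalized to sum to one.
    """
    wins = np.asarray(wins, dtype=float)
    games = wins + wins.T                     # comparisons per pair
    np.fill_diagonal(games, 0.0)
    w = wins.sum(axis=1)                      # total wins per item
    p = np.ones(wins.shape[0])
    for _ in range(n_iter):
        denom = (games / (p[:, None] + p[None, :])).sum(axis=1)
        p = w / denom                         # MM update
        p = p / p.sum()
    return p

# toy example: item 0 dominates item 1, which dominates item 2
wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
strengths = bradley_terry(wins)
```

The empirical Bayes layer in the paper would then shrink and select among such strength estimates rather than report the raw MLE.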
Probabilistic Prediction for Binary Treatment Choice: with focus on personalized medicine
Charles F. Manski
This paper extends my research applying statistical decision theory to treatment choice with sample data, using maximum regret to evaluate the performance of treatment rules. The specific new contribution is to study as-if optimization using estimates of illness probabilities in clinical choice between surveillance and aggressive treatment. Beyond its specifics, the paper sends a broad message. Statisticians and computer scientists have addressed conditional prediction for decision making in indirect ways, the former applying classical statistical theory and the latter measuring prediction accuracy in test samples. Neither approach is satisfactory. Statistical decision theory provides a coherent, generally applicable methodology.
Inference in Incomplete Models
Alfred Galichon, Marc Henry
We provide a test for the specification of a structural model without identifying assumptions. We show the equivalence of several natural formulations of correct specification, which we take as our null hypothesis. From a natural empirical version of the latter, we derive a Kolmogorov-Smirnov statistic for Choquet capacity functionals, which we use to construct our test. We derive the limiting distribution of our test statistic under the null, and show that our test is consistent against certain classes of alternatives. When the model is given in parametric form, the test can be inverted to yield confidence regions for the identified parameter set. The approach can be applied to the estimation of models with sample selection, censored observables and to games with multiple equilibria.
Identifying and Estimating Perceived Returns to Binary Investments
Clint Harris
I describe a method for estimating agents' perceived returns to investments that relies on cross-sectional data containing binary choices and prices, where prices may be imperfectly known to agents. This method identifies the scale of perceived returns by assuming agent knowledge of an identity that relates profits, revenues, and costs rather than by eliciting or assuming agent beliefs about structural parameters that are estimated by researchers. With this assumption, modest adjustments to standard binary choice estimators enable consistent estimation of perceived returns when using price instruments that are uncorrelated with unobserved determinants of agents' price misperceptions as well as other unobserved determinants of their perceived returns. I demonstrate the method, and the importance of using price variation that is known to agents, in a series of data simulations.
Spectral Targeting Estimation of $\lambda$-GARCH models
Simon Hetland
This paper presents a novel estimator of orthogonal GARCH models, which combines (eigenvalue and eigenvector) targeting estimation with stepwise (univariate) estimation. We call this the spectral targeting estimator. This two-step estimator is consistent under finite second-order moments, while asymptotic normality holds under finite fourth-order moments. The estimator is especially well suited for modelling larger portfolios: we compare the empirical performance of the spectral targeting estimator to that of the quasi-maximum likelihood estimator for five portfolios of 25 assets. The spectral targeting estimator dominates in terms of computational complexity, being up to 57 times faster to estimate, while both estimators produce similar out-of-sample forecasts, indicating that the spectral targeting estimator is well suited for high-dimensional empirical applications.
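The first, "targeting" step of such a two-step procedure can be illustrated as follows: eigendecompose the sample covariance of the returns and rotate them into sample-uncorrelated components. This is a hypothetical sketch; the second step, fitting a univariate GARCH(1,1) to each rotated series, is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
# simulate 1000 days of two correlated return series (conditional
# heteroskedasticity omitted -- this only illustrates the rotation)
L = np.array([[1.0, 0.0],
              [0.8, 0.6]])
x = rng.normal(size=(1000, 2)) @ L.T

S = np.cov(x, rowvar=False)                  # sample covariance
eigval, eigvec = np.linalg.eigh(S)           # spectral (eigen) targeting
y = x @ eigvec                               # rotated, sample-uncorrelated series
# step two (omitted): fit a univariate GARCH(1,1) to each column of y
```

Because the rotated series are uncorrelated by construction, the multivariate problem decouples into univariate ones, which is the source of the computational savings.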
Assessing Inference Methods
Bruno Ferman
We analyze different types of simulations that applied researchers can use to assess whether their inference methods reliably control false-positive rates. We show that different assessments involve trade-offs, varying in the types of problems they may detect, finite-sample performance, susceptibility to sequential-testing distortions, susceptibility to cherry-picking, and implementation complexity. We also show that a commonly used simulation to assess inference methods in shift-share designs can lead to misleading conclusions and propose alternatives. Overall, we provide novel insights and recommendations for applied researchers on how to choose, implement, and interpret inference assessments in their empirical applications.
Semiparametric correction for endogenous truncation bias with Vox Populi based participation decision
Nir Billfeld, Moshe Kim
We synthesize knowledge from various scientific disciplines to develop a semiparametric algorithm that is robust to endogenous truncation, correcting for truncation bias due to endogenous self-selection. This synthesis enriches the algorithm's accuracy, efficiency, and applicability. Improving upon the covariate shift assumption, we recognize that data are intrinsically affected, and largely generated, by their own behavior (cognition). Refining the concept of Vox Populi (wisdom of the crowd) allows data points to sort themselves out depending on their estimated latent reference-group opinion space. Monte Carlo simulations based on 2,000,000 different distribution functions, generating roughly 100 million realizations, attest to the very high accuracy of our model.
Indirect Inference for Locally Stationary Models
David Frazier, Bonsoo Koo
We propose the use of indirect inference estimation to conduct inference in complex locally stationary models. We develop a local indirect inference algorithm and establish the asymptotic properties of the proposed estimator. Due to the nonparametric nature of locally stationary models, the resulting indirect inference estimator exhibits nonparametric rates of convergence. We validate our methodology with simulation studies in the confines of a locally stationary moving average model and a new locally stationary multiplicative stochastic volatility model. Using this indirect inference methodology and the new locally stationary volatility model, we obtain evidence of non-linear, time-varying volatility trends for monthly returns on several Fama-French portfolios.
Identification in discrete choice models with imperfect information
Cristina Gualdani, Shruti Sinha
We study identification of preferences in static single-agent discrete choice models where decision makers may be imperfectly informed about the state of the world. We leverage the notion of one-player Bayes Correlated Equilibrium by Bergemann and Morris (2016) to provide a tractable characterization of the sharp identified set. We develop a procedure to practically construct the sharp identified set following a sieve approach, and provide sharp bounds on counterfactual outcomes of interest. We use our methodology and data on the 2017 UK general election to estimate a spatial voting model under weak assumptions on agents' information about the returns to voting. Counterfactual exercises quantify the consequences of imperfect information on the well-being of voters and parties.
Identification and estimation of multinomial choice models with latent special covariates
Nail Kashaev
Identification of multinomial choice models is often established by using special covariates that have full support. This paper shows how these identification results can be extended to a large class of multinomial choice models when all covariates are bounded. I also provide a new $\sqrt{n}$-consistent asymptotically normal estimator of the finite-dimensional parameters of the model.
Pricing Mechanism in Information Goods
Xinming Li, Huaqing Wang
We study the performance of three pricing mechanisms and their effects on the participants in the data industry from a data supply chain perspective. We obtain analytical solutions under each mechanism, including decentralized and centralized pricing, Nash bargaining pricing, and revenue sharing, and propose a win-win pricing strategy for the players in the data supply chain.
A Growth Model with Unemployment
Mina Mahmoudi, Mark Pingle
A standard growth model is modified in a straightforward way to incorporate what Keynes (1936) called the "essence" of his general theory: the idea that exogenous changes in investment cause changes in employment and unemployment. We implement this idea by making the path of the capital growth rate exogenous in the growth model. The result is a growth model that can explain both long-term trends and fluctuations around the trend. We test the modified model using U.S. economic data from 1947 to 2014. The hypothesized inverse relationship between capital growth and changes in unemployment is confirmed, and the structurally estimated model fits fluctuations in unemployment reasonably well.
Bootstrap Methods in Econometrics
Joel L. Horowitz
The bootstrap is a method for estimating the distribution of an estimator or test statistic by re-sampling the data or a model estimated from the data. Under conditions that hold in a wide variety of econometric applications, the bootstrap provides approximations to distributions of statistics, coverage probabilities of confidence intervals, and rejection probabilities of hypothesis tests that are more accurate than the approximations of first-order asymptotic distribution theory. The reductions in the differences between true and nominal coverage or rejection probabilities can be very large. In addition, the bootstrap provides a way to carry out inference in certain settings where obtaining analytic distributional approximations is difficult or impossible. This article explains the usefulness and limitations of the bootstrap in contexts of interest in econometrics. The presentation is informal and expository. It provides an intuitive understanding of how the bootstrap works. Mathematical details are available in references that are cited.
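The basic resampling idea can be sketched in a few lines: a generic nonparametric percentile bootstrap for a confidence interval. Function names and defaults here are illustrative, not from any particular package.

```python
import numpy as np

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Nonparametric bootstrap percentile confidence interval for stat(data)."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = data.size
    reps = np.array([stat(data[rng.integers(0, n, n)])   # resample with replacement
                     for _ in range(n_boot)])
    lo, hi = np.quantile(reps, [alpha / 2.0, 1.0 - alpha / 2.0])
    return lo, hi

rng = np.random.default_rng(42)
sample = rng.normal(loc=1.0, scale=2.0, size=200)
lo, hi = bootstrap_ci(sample, np.mean)       # 95% percentile CI for the mean
```

The higher-order accuracy discussed in the article comes from bootstrapping asymptotically pivotal statistics (e.g. studentized means) rather than this simple percentile construction.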
The Impact of Supervision and Incentive Process in Explaining Wage Profile and Variance
Nitsa Kasir, Idit Sohlberg
The implementation of a supervision and incentive process for identical workers may lead to wage variance that stems from employer and employee optimization. The harder it is to assess the nature of the labor output, the more important such a process becomes, and the greater its influence on wage growth. The dynamic model presented in this paper shows that an employer will choose to pay a worker a starting wage that is lower than the worker deserves, resulting in a wage profile that fits the classic profile in the human-capital literature. The wage profile and wage variance rise at times of technological advancement, which leads to increased turnover as older workers are replaced by younger ones due to a rise in the relative marginal cost of the former.
Randomization Tests for Equality in Dependence Structure
Juwon Seo
We develop a new statistical procedure to test whether the dependence structure is identical between two groups. Rather than relying on a single index such as Pearson's correlation coefficient or Kendall's Tau, we consider the entire dependence structure by investigating the dependence functions (copulas). The critical values are obtained by a modified randomization procedure designed to exploit asymptotic group invariance conditions. Implementation of the test is intuitive and simple, and does not require any specification of a tuning parameter or weight function. At the same time, the test exhibits excellent finite sample performance, with the null rejection rates almost equal to the nominal level even when the sample size is extremely small. Two empirical applications concerning the dependence between income and consumption, and the Brexit effect on European financial market integration are provided.
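The randomization logic can be illustrated with a simplified sketch that, unlike the paper's copula-based statistic, compares only a single index (Kendall's tau) between the two groups; all names and the choice of statistic are illustrative simplifications.

```python
import numpy as np

def kendall_tau(x, y):
    """Kendall's tau via pairwise sign agreement (O(n^2); fine for small n)."""
    n = len(x)
    s = 0.0
    for i in range(n - 1):
        s += np.sum(np.sign(x[i + 1:] - x[i]) * np.sign(y[i + 1:] - y[i]))
    return 2.0 * s / (n * (n - 1))

def randomization_test(a, b, n_perm=200, seed=0):
    """Permutation p-value for equal dependence (as measured by tau).

    a, b : (n, 2) arrays of (x, y) pairs; rows are reshuffled between
    the two groups to build the reference distribution.
    """
    rng = np.random.default_rng(seed)

    def tau_diff(u, v):
        return abs(kendall_tau(u[:, 0], u[:, 1]) - kendall_tau(v[:, 0], v[:, 1]))

    observed = tau_diff(a, b)
    pooled = np.vstack([a, b])
    n_a = len(a)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        count += tau_diff(pooled[idx[:n_a]], pooled[idx[n_a:]]) >= observed
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
xa = rng.normal(size=60)
a = np.column_stack([xa, rng.normal(size=60)])             # independent pair
xb = rng.normal(size=60)
b = np.column_stack([xb, xb + 0.1 * rng.normal(size=60)])  # strongly dependent pair
p_value = randomization_test(a, b)
```

The paper's test replaces the single index with a distance between empirical copulas and modifies the randomization to respect asymptotic group invariance, but the permutation skeleton is the same: no tuning parameter or weight function is needed.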