arXiv Open Access 2025

Limits To (Machine) Learning

Zhimin Chen Bryan Kelly Semyon Malamud

Abstract

Machine learning (ML) methods are highly flexible, but their ability to approximate the true data-generating process is fundamentally constrained by finite samples. We characterize a universal lower bound, the Limits-to-Learning Gap (LLG), quantifying the unavoidable discrepancy between a model's empirical fit and the population benchmark. Recovering the true population $R^2$, therefore, requires correcting observed predictive performance by this bound. Using a broad set of variables, including excess returns, yields, credit spreads, and valuation ratios, we find that the implied LLGs are large. This indicates that standard ML approaches can substantially understate true predictability in financial data. We also derive LLG-based refinements to the classic Hansen and Jagannathan (1991) bounds, analyze implications for parameter learning in general-equilibrium settings, and show that the LLG provides a natural mechanism for generating excess volatility.
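The abstract states that the population $R^2$ can be recovered by correcting a model's empirical fit for the LLG. The sketch below illustrates only that additive correction on simulated data; the gap value, the zero-benchmark $R^2$ convention, and the `empirical_r2` helper are hypothetical assumptions, not the paper's estimator.

```python
import numpy as np

def empirical_r2(y_true, y_pred):
    """Out-of-sample R^2 against a zero benchmark (hypothetical convention,
    sometimes used for excess returns)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum(y_true ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
n = 500
signal = rng.normal(size=n)
y = 0.1 * signal + rng.normal(size=n)   # weak true predictability
y_hat = 0.1 * signal                    # fitted model's predictions

r2_emp = empirical_r2(y, y_hat)
llg = 0.005  # hypothetical Limits-to-Learning Gap for illustration
r2_pop_implied = r2_emp + llg  # observed fit understates true predictability
print(f"empirical R^2 = {r2_emp:.4f}, implied population R^2 = {r2_pop_implied:.4f}")
```

With a large implied LLG, the corrected figure can exceed the raw out-of-sample $R^2$ by a meaningful margin, which is the sense in which standard ML approaches understate true predictability.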


Authors (3)

Zhimin Chen
Bryan Kelly
Semyon Malamud

Citation Format

Chen, Z., Kelly, B., & Malamud, S. (2025). Limits To (Machine) Learning. arXiv. https://arxiv.org/abs/2512.12735

Journal Information
Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓