On non-approximability of zero loss global L2 minimizers by gradient descent in deep learning
Thomas Chen
Patricia Muñoz Ewald
Abstract
We analyze geometric aspects of the gradient descent algorithm in Deep Learning (DL), and give a detailed discussion of the circumstance that, in underparametrized DL networks, zero loss minimization cannot generically be attained. As a consequence, we conclude that the distribution of training inputs must necessarily be non-generic in order to produce zero loss minimizers, both for the method constructed in [2, 3] and for gradient descent [1] (both of which assume clustering of the training data).
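As a rough illustration of the underparametrization point in the abstract (not the paper's construction or its DL setting), the following NumPy sketch runs gradient descent on an underparametrized least-squares problem with generic random data; the attainable minimum of the L2 loss stays strictly positive, so zero loss cannot be reached. All names and parameter values here are illustrative assumptions.

```python
# Toy sketch (assumed setup, not the authors' method): gradient descent on an
# underparametrized least-squares problem. With fewer parameters than generic
# training samples, the global minimum of the L2 loss is strictly positive.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_params = 50, 5                   # underparametrized: 5 < 50
X = rng.normal(size=(n_samples, n_params))    # generic (random) training inputs
y = rng.normal(size=n_samples)                # generic targets

def loss(w):
    # Mean squared (L2) training loss.
    r = X @ w - y
    return 0.5 * np.mean(r ** 2)

w = np.zeros(n_params)
lr = 1e-2
for step in range(20_000):
    grad = X.T @ (X @ w - y) / n_samples      # gradient of the loss
    w -= lr * grad

# Global minimizer of this convex problem via the least-squares solution.
w_star, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"loss after gradient descent : {loss(w):.6f}")
print(f"loss at global minimizer    : {loss(w_star):.6f}  (> 0 for generic data)")
```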
Journal Information
- Publication Year: 2025
- Source Database: DOAJ
- DOI: 10.2298/TAM250121008C
- Access: Open Access ✓