Semantic Scholar · Open Access · 2015 · 1743 citations

Training Very Deep Networks

R. Srivastava Klaus Greff J. Schmidhuber

Abstract

Theoretical and empirical evidence indicates that the depth of neural networks is crucial for their success. However, training becomes more difficult as depth increases, and training of very deep networks remains an open problem. Here we introduce a new architecture designed to overcome this. Our so-called highway networks allow unimpeded information flow across many layers on information highways. They are inspired by Long Short-Term Memory recurrent networks and use adaptive gating units to regulate the information flow. Even with hundreds of layers, highway networks can be trained directly through simple gradient descent. This enables the study of extremely deep and efficient architectures.
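The gating mechanism the abstract describes can be sketched as a single highway layer: a transform gate T(x) decides how much of a nonlinear transform H(x) passes through and how much of the input is carried unchanged. The sketch below is a minimal numpy illustration under our own assumptions (the name `highway_layer`, tanh as the choice of H, and the toy dimensions are ours), not the paper's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, W_H, b_H, W_T, b_T):
    """One highway layer: y = H(x)*T(x) + x*(1 - T(x)).

    H is a plain nonlinear transform; T is the transform gate
    (a sigmoid) that mixes the transformed output with the
    unchanged input carried from the layer below.
    """
    H = np.tanh(x @ W_H + b_H)      # candidate transform (tanh assumed here)
    T = sigmoid(x @ W_T + b_T)      # transform gate, values in (0, 1)
    return H * T + x * (1.0 - T)    # gated mix of transform and carry

# With a strongly negative gate bias the layer starts close to the
# identity, which is what lets gradients flow through many layers.
rng = np.random.default_rng(0)
d = 4
x = rng.standard_normal(d)
W_H = rng.standard_normal((d, d)) * 0.1
W_T = rng.standard_normal((d, d)) * 0.1
y = highway_layer(x, W_H, np.zeros(d), W_T, np.full(d, -10.0))
print(np.allclose(y, x, atol=1e-3))  # gate ≈ 0, so output ≈ input
```

Because the carry path is (1 − T(x))·x rather than a fixed skip connection, the network can learn per-unit how much information to route around each layer.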


Authors (3)

R. Srivastava
Klaus Greff
J. Schmidhuber

Citation Format

Srivastava, R., Greff, K., Schmidhuber, J. (2015). Training Very Deep Networks. https://www.semanticscholar.org/paper/b92aa7024b87f50737b372e5df31ef091ab54e62

Quick Access

PDF not directly available; check the original source.
Journal Information
Publication Year: 2015
Language: en
Total Citations: 1743
Source Database: Semantic Scholar
Access: Open Access ✓