
Learning both Weights and Connections for Efficient Neural Network

Song Han Jeff Pool J. Tran W. Dally

Abstract

Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
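The train → prune → retrain loop described in the abstract can be summarized in a short sketch. The snippet below is a minimal, hypothetical PyTorch illustration of magnitude-based pruning on a toy two-layer network with synthetic data; the 90% pruning ratio, the model, and the training schedule are illustrative assumptions, not the paper's AlexNet/VGG-16 experiments or the authors' released code.

```python
# Illustrative sketch of the abstract's three-step pipeline:
# (1) train densely, (2) prune small-magnitude connections, (3) retrain
# the surviving weights. Model, data, and threshold are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(512, 100)          # synthetic inputs
y = torch.randint(0, 10, (512,))   # synthetic labels

model = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 10))
loss_fn = nn.CrossEntropyLoss()

def train(model, steps, lr=1e-2, masks=None):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        # Keep pruned connections at zero while retraining.
        if masks is not None:
            with torch.no_grad():
                for layer, mask in masks.items():
                    layer.weight *= mask

# Step 1: dense training learns which connections carry large weights.
train(model, steps=200)

# Step 2: prune connections whose magnitude falls below a threshold
# (here: keep the largest 10% of weights per layer, an arbitrary choice).
masks = {}
with torch.no_grad():
    for layer in model:
        if isinstance(layer, nn.Linear):
            k = int(0.9 * layer.weight.numel())
            threshold = layer.weight.abs().flatten().kthvalue(k).values
            masks[layer] = (layer.weight.abs() > threshold).float()
            layer.weight *= masks[layer]

# Step 3: retrain the remaining connections to recover accuracy.
train(model, steps=200, masks=masks)

pruned = sum(float((m == 0).sum()) for m in masks.values())
total = sum(m.numel() for m in masks.values())
print(f"fraction of pruned connections: {pruned / total:.2%}")
```

In the paper itself, pruning and retraining can be iterated, and the reported 9x/13x parameter reductions come from applying this procedure to AlexNet and VGG-16 on ImageNet rather than to a toy network like the one above.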


Authors (4)

Song Han

Jeff Pool

J. Tran

W. Dally

Citation Format

Han, S., Pool, J., Tran, J., & Dally, W. (2015). Learning both Weights and Connections for Efficient Neural Network. https://www.semanticscholar.org/paper/1ff9a37d766e3a4f39757f5e1b235a42dacf18ff

Journal Information
Year Published: 2015
Language: English (en)
Total Citations: 7,538
Source Database: Semantic Scholar
Access: Open Access