Semantic Scholar · Open Access · 2013 · 16,366 citations

Intriguing properties of neural networks

Christian Szegedy, Wojciech Zaremba, I. Sutskever, Joan Bruna, D. Erhan, +2 more

Abstract

Deep neural networks are highly expressive models that have recently achieved state-of-the-art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis. This suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.
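The perturbation search described in the abstract can be illustrated with a single gradient step on a toy linear classifier. This is only a sketch of the underlying idea (ascending the loss gradient with respect to the input); the paper itself solves a box-constrained optimization with L-BFGS, and all weights, sizes, and values below are illustrative, not from the paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative linear "network": 3 classes, 4 input features (arbitrary weights).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
x = rng.normal(size=4)                 # a "clean" input
y = int(np.argmax(softmax(W @ x)))     # the model's original prediction

# Gradient of the cross-entropy loss for label y w.r.t. the INPUT x:
#   grad_x = W^T (p - onehot(y));  moving x along it increases prediction error.
p = softmax(W @ x)
grad_x = W.T @ (p - np.eye(3)[y])

# Small perturbation in the ascent direction; the sign step bounds each
# coordinate's change by eps, keeping the perturbation visually small.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)
y_adv = int(np.argmax(softmax(W @ x_adv)))
print(y, y_adv)
```

With a large enough `eps` the perturbed input `x_adv` is misclassified relative to the clean prediction, even though each coordinate changed by at most `eps`.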


Authors (7)

Christian Szegedy, Wojciech Zaremba, I. Sutskever, Joan Bruna, D. Erhan, I. Goodfellow, R. Fergus

Citation Format

Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., & Fergus, R. (2013). Intriguing properties of neural networks. https://www.semanticscholar.org/paper/d891dc72cbd40ffaeefdc79f2e7afe1e530a23ad

Journal Information
Year Published
2013
Language
en
Total Citations
16,366
Source Database
Semantic Scholar
Access
Open Access ✓