
Explaining and Harnessing Adversarial Examples

I. Goodfellow Jonathon Shlens Christian Szegedy

Abstract

Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.
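The "simple and fast method" mentioned in the abstract is the fast gradient sign method (FGSM), which perturbs an input x by η = ε·sign(∇x J(θ, x, y)). Below is a minimal sketch of FGSM and the paper's adversarial training objective, written in PyTorch purely as an illustration (the original experiments predate that framework); the function names `fgsm_perturb` and `adversarial_training_loss` are hypothetical, not from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.25):
    """Fast gradient sign method: eta = epsilon * sign(grad_x J(theta, x, y)).
    epsilon = 0.25 is the MNIST setting reported in the paper."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # Differentiate the loss w.r.t. the input only, leaving
    # parameter gradients untouched.
    (grad,) = torch.autograd.grad(loss, x)
    return (x + epsilon * grad.sign()).detach()

def adversarial_training_loss(model, x, y, epsilon=0.25, alpha=0.5):
    """Adversarial training objective from the paper:
    alpha * J(theta, x, y) + (1 - alpha) * J(theta, x + eta, y)."""
    x_adv = fgsm_perturb(model, x, y, epsilon)
    clean_loss = F.cross_entropy(model(x), y)
    adv_loss = F.cross_entropy(model(x_adv), y)
    return alpha * clean_loss + (1 - alpha) * adv_loss
```

Minimizing this combined objective on each training batch is the adversarial training procedure the abstract credits with reducing the maxout network's test set error on MNIST.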

Authors (3)

I. Goodfellow

Jonathon Shlens

Christian Szegedy

Citation Format

Goodfellow, I., Shlens, J., & Szegedy, C. (2014). Explaining and Harnessing Adversarial Examples. https://www.semanticscholar.org/paper/bee044c8e8903fb67523c1f8c105ab4718600cdb

Quick Access

PDF not directly available; check the original source.
Journal Information

Year Published: 2014
Language: English (en)
Total Citations: 21,758
Source Database: Semantic Scholar
Access: Open Access ✓