
mixup: Beyond Empirical Risk Minimization

Hongyi Zhang Moustapha Cissé Yann Dauphin David Lopez-Paz

Abstract

Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
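The core operation described in the abstract is a convex combination of two training examples and their labels: given (x_i, y_i) and (x_j, y_j), mixup forms a virtual example x~ = lam * x_i + (1 - lam) * x_j and y~ = lam * y_i + (1 - lam) * y_j, with lam drawn from a Beta(alpha, alpha) distribution. Below is a minimal sketch of how this could look in a PyTorch-style training step; the helper names (mixup_data, mixup_criterion) and the in-batch pairing via a random permutation are illustrative assumptions, not a reproduction of the authors' exact code.

    # Minimal mixup sketch (illustrative; assumes a standard PyTorch classification setup).
    import numpy as np
    import torch

    def mixup_data(x, y, alpha=1.0):
        """Mix a batch with a randomly permuted copy of itself.

        Returns the mixed inputs, the two label tensors being combined,
        and the mixing coefficient lambda drawn from Beta(alpha, alpha).
        """
        lam = np.random.beta(alpha, alpha) if alpha > 0 else 1.0
        index = torch.randperm(x.size(0))          # random pairing within the batch
        mixed_x = lam * x + (1 - lam) * x[index]   # convex combination of inputs
        return mixed_x, y, y[index], lam

    def mixup_criterion(criterion, pred, y_a, y_b, lam):
        # For cross-entropy, this equals the loss against the convex
        # combination of the two one-hot label vectors.
        return lam * criterion(pred, y_a) + (1 - lam) * criterion(pred, y_b)

    # Hypothetical usage inside a training loop (model, optimizer, loader assumed to exist):
    # for x, y in loader:
    #     mixed_x, y_a, y_b, lam = mixup_data(x, y, alpha=0.2)
    #     loss = mixup_criterion(torch.nn.functional.cross_entropy,
    #                            model(mixed_x), y_a, y_b, lam)
    #     loss.backward(); optimizer.step(); optimizer.zero_grad()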

Authors (4)

Hongyi Zhang

Moustapha Cissé

Yann Dauphin

David Lopez-Paz

Citation Format

Zhang, H., Cissé, M., Dauphin, Y., Lopez-Paz, D. (2017). mixup: Beyond Empirical Risk Minimization. https://www.semanticscholar.org/paper/4feef0fd284feb1233399b400eb897f59ec92755

Quick Access

PDF not directly available

Journal Information
Publication Year
2017
Language
en
Total Citations
11,536
Source Database
Semantic Scholar
Access
Open Access ✓