arXiv Open Access 2024

Inverting Gradient Attacks Makes Powerful Data Poisoning

Wassim Bouaziz, El-Mahdi El-Mhamdi, Nicolas Usunier

Abstract

Gradient attacks and data poisoning tamper with the training of machine learning algorithms to maliciously alter them, and the two have been proven equivalent in convex settings. The extent of harm these attacks can produce in non-convex settings is still to be determined. Gradient attacks can affect far fewer systems than data poisoning but have been argued to be more harmful, since they can be arbitrary, whereas data poisoning restricts the attacker to injecting data points into training sets, e.g. through legitimate participation in a collaborative dataset. This raises the question of whether the harm done by gradient attacks can be matched by data poisoning in non-convex settings. In this work, we provide a positive answer in a worst-case scenario and show how data poisoning can mimic a gradient attack to perform an availability attack on (non-convex) neural networks. Through gradient inversion, commonly used to reconstruct data points from actual gradients, we show that reconstructing data points from malicious gradients can be sufficient to perform a range of attacks. This allows us to show, for the first time, an availability attack on neural networks through data poisoning that degrades the model's performance to random level with a minority (as low as 1%) of poisoned points.
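The core primitive described in the abstract, reconstructing a data point that realizes a target gradient, can be illustrated on a toy logistic-regression model. This is a minimal sketch under assumed simplifications, not the paper's neural-network setting: the function names, the single-point setup, and all parameter values below are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model_gradient(theta, x, y):
    # Gradient of the logistic loss w.r.t. theta for one point (x, y):
    # grad = (sigmoid(theta @ x) - y) * x
    return (sigmoid(theta @ x) - y) * x

def invert_gradient(theta, g_target, y, steps=5000, lr=0.1):
    """Find a data point x whose training gradient matches g_target.

    Minimizes || model_gradient(theta, x, y) - g_target ||^2 over x by
    gradient descent, using the analytic derivative of this match
    objective for the logistic-regression toy model.
    """
    # Warm start: since grad = r * x, candidate solutions lie along
    # g_target; the sign of the scaling here assumes y = 1 (r < 0).
    x = -2.0 * g_target
    for _ in range(steps):
        p = sigmoid(theta @ x)
        r = p - y                      # model residual on (x, y)
        resid = r * x - g_target       # gradient-match residual
        # d/dx ||r*x - g_target||^2, accounting for r's dependence on x
        grad_x = 2.0 * (r * resid + p * (1.0 - p) * (resid @ x) * theta)
        x -= lr * grad_x
    return x

theta = np.array([0.5, -0.3, 0.2, 0.1, -0.4])   # current model weights
x_true = np.array([1.0, -1.0, 0.5, 2.0, -0.5])  # point behind the target gradient
g_star = model_gradient(theta, x_true, 1.0)     # gradient the attacker wants realized
x_poison = invert_gradient(theta, g_star, 1.0)  # reconstructed poison point
```

In the paper's setting the target gradient is a malicious one chosen by the attacker, and the reconstructed points are injected into the training set so that training on them produces (approximately) that gradient.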


Authors (3)

Wassim Bouaziz

El-Mahdi El-Mhamdi

Nicolas Usunier

Citation Format

Bouaziz, W., El-Mhamdi, E., Usunier, N. (2024). Inverting Gradient Attacks Makes Powerful Data Poisoning. https://arxiv.org/abs/2410.21453

Journal Information
Year Published
2024
Language
en
Source Database
arXiv
Access
Open Access ✓