arXiv Open Access 2025

Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach

Huazi Pan, Yanjun Zhang, Leo Yu Zhang, Scott Adams, Abbas Kouzani, Suiyang Khoo

Abstract

Manipulation of local training data and local updates, i.e., the poisoning attack, is the main threat arising from the collaborative nature of the federated learning (FL) paradigm. Most existing poisoning attacks aim to manipulate local data or models in a way that causes denial-of-service (DoS) issues. In this paper, we introduce a novel attack method, named the Federated Learning Sliding Attack (FedSA) scheme, which aims to precisely control the extent of poisoning in a subtle, controlled manner. It operates with a predefined objective, such as reducing the global model's prediction accuracy by 10%. FedSA integrates Sliding Mode Control (SMC), a robust nonlinear control technique, with model poisoning attacks. It manipulates the updates from malicious clients to drive the global model towards a compromised state, doing so at a controlled and inconspicuous rate. Additionally, the robust control properties of FedSA allow precise control over the convergence bounds, enabling the attacker to set the global accuracy of the poisoned model to any desired level. Experimental results demonstrate that FedSA can accurately achieve a predefined global accuracy with fewer malicious clients while maintaining a high level of stealth and adjustable learning rates.
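The abstract's central idea, steering a quantity toward a predefined target along a sliding surface, can be illustrated with a minimal toy sketch. This is not the paper's algorithm: the scalar "accuracy" dynamics, the surface definition s = accuracy − target, and the gain `eta` are all illustrative assumptions.

```python
# Toy sliding-mode-style reaching law driving a scalar "accuracy"
# toward a predefined target -- an illustrative sketch, not FedSA itself.

def smc_step(accuracy, target, eta=0.01):
    """One control step: sliding surface s = accuracy - target;
    the update pushes the state toward the surface s = 0."""
    s = accuracy - target
    # sign(s) is the classic discontinuous SMC reaching law; eta bounds
    # the per-step perturbation (a stealth vs. convergence-rate trade-off).
    sign = (s > 0) - (s < 0)
    return accuracy - eta * sign

accuracy, target = 0.92, 0.82  # e.g., degrade global accuracy by 10 points
for _ in range(30):
    accuracy = smc_step(accuracy, target, eta=0.01)
# After the reaching phase, the state stays within eta of the target,
# chattering around the sliding surface s = 0.
```

The discontinuous sign term is what gives sliding mode control its robustness: regardless of disturbances smaller than `eta`, the state is pulled back onto the surface each step, which is the property the abstract credits for the attack's guaranteed convergence bounds.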


Authors (6)

Huazi Pan
Yanjun Zhang
Leo Yu Zhang
Scott Adams
Abbas Kouzani
Suiyang Khoo

Citation Format

Pan, H., Zhang, Y., Zhang, L.Y., Adams, S., Kouzani, A., & Khoo, S. (2025). Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach. arXiv preprint. https://arxiv.org/abs/2505.16403

Journal Information
Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓