arXiv Open Access 2025

A Bayesian Incentive Mechanism for Poison-Resilient Federated Learning

Daniel Commey, Rebecca A. Sarpong, Griffith S. Klogo, Winful Bagyl-Bac, Garth V. Crosby

Abstract

Federated learning (FL) enables collaborative model training across decentralized clients while preserving data privacy. However, its open-participation nature exposes it to data-poisoning attacks, in which malicious actors submit corrupted model updates to degrade the global model. Existing defenses are often reactive, relying on statistical aggregation rules that can be computationally expensive and that typically assume an honest majority. This paper introduces a proactive, economic defense: a lightweight Bayesian incentive mechanism that makes malicious behavior economically irrational. Each training round is modeled as a Bayesian game of incomplete information in which the server, acting as the principal, uses a small, private validation dataset to verify update quality before issuing payments. The design satisfies Individual Rationality (IR) for benevolent clients, ensuring their participation is profitable, and Incentive Compatibility (IC), making poisoning an economically dominated strategy. Extensive experiments on non-IID partitions of MNIST and FashionMNIST demonstrate robustness: with 50% label-flipping adversaries on MNIST, the mechanism maintains 96.7% accuracy, only 0.3 percentage points lower than in a scenario with 30% label-flipping adversaries. This outcome is 51.7 percentage points better than standard FedAvg, which collapses under the same 50% attack. The mechanism is computationally light, budget-bounded, and readily integrates into existing FL frameworks, offering a practical route to economically robust and sustainable FL ecosystems.
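The verification-then-payment idea described in the abstract can be illustrated with a toy sketch. The function names, reward values, and tolerance below are illustrative assumptions, not the paper's actual mechanism: the server pays a client only if its update does not degrade accuracy on the private validation set, so an honest client's expected utility is positive (IR) while a poisoner's expected payment cannot cover its participation cost (IC).

```python
# Hypothetical sketch of a validation-gated payment rule. All names and
# constants here are illustrative assumptions, not the paper's mechanism.

def payment(acc_with_update, acc_without_update, reward=1.0, tol=0.0):
    """Pay a client only if its update does not degrade validation accuracy
    (beyond a small tolerance `tol`) on the server's private validation set."""
    if acc_with_update >= acc_without_update - tol:
        return reward
    return 0.0

def expected_client_utility(p_pass, reward, participation_cost):
    """Expected payoff of a client whose update passes verification with
    probability p_pass.
    IR: an honest client (high p_pass) earns reward - cost > 0.
    IC: a poisoned update rarely passes (low p_pass), so poisoning yields
    a negative expected payoff and is economically dominated."""
    return p_pass * reward - participation_cost
```

For example, an honest client passing verification 90% of the time nets a positive expected utility, while an attacker whose corrupted updates pass only 5% of the time loses money each round under the same reward and cost.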



Citation Format

Commey, D., Sarpong, R.A., Klogo, G.S., Bagyl-Bac, W., Crosby, G.V. (2025). A Bayesian Incentive Mechanism for Poison-Resilient Federated Learning. https://arxiv.org/abs/2507.12439

Journal Information

Publication Year: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓