arXiv Open Access 2024

SSFL: Discovering Sparse Unified Subnetworks at Initialization for Efficient Federated Learning

Riyasat Ohib, Bishal Thapaliya, Gintare Karolina Dziugaite, Jingyu Liu, Vince Calhoun, Sergey Plis

Abstract

In this work, we propose Salient Sparse Federated Learning (SSFL), a streamlined approach for sparse federated learning with efficient communication. SSFL identifies a sparse subnetwork prior to training by leveraging parameter saliency scores computed separately on local client data in non-IID scenarios, which are then aggregated to determine a global mask. Only the sparse model weights are trained and communicated each round between the clients and the server. On standard benchmarks including CIFAR-10, CIFAR-100, and Tiny-ImageNet, SSFL consistently improves the accuracy-sparsity trade-off, achieving more than 20% relative error reduction on CIFAR-10 compared to the strongest sparse baseline, while reducing communication costs by 2× relative to dense FL. Finally, in a real-world federated learning deployment, SSFL delivers over 2.3× faster communication time, underscoring its practical efficiency.
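The mask-discovery step described in the abstract can be sketched as follows: each client scores parameters on its local data, the server sums the scores, and the top fraction of parameters is kept as a shared global mask. This is a minimal NumPy sketch; the specific saliency score shown (a SNIP-style |gradient × weight|) and the sum aggregation are illustrative assumptions, not necessarily the exact formulation used in the paper.

```python
import numpy as np

def client_saliency(weights, grads):
    """SNIP-style saliency |g * w| per parameter (illustrative choice)."""
    return np.abs(weights * grads)

def global_mask(client_scores, sparsity):
    """Sum saliency scores across clients and keep the top (1 - sparsity)
    fraction of parameters as a shared binary mask."""
    total = np.sum(client_scores, axis=0)
    k = int(round((1.0 - sparsity) * total.size))
    top = np.argsort(total.ravel())[::-1][:k]   # indices of the k largest scores
    mask = np.zeros(total.size, dtype=bool)
    mask[top] = True
    return mask.reshape(total.shape)

# Toy example: 3 clients score the same 4x4 weight tensor at initialization.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
scores = [client_saliency(w, rng.normal(size=(4, 4))) for _ in range(3)]
mask = global_mask(scores, sparsity=0.75)  # keep 25% of weights
print(int(mask.sum()))  # 4 of 16 entries survive
```

After the mask is fixed, each round only the surviving weights (here 4 of 16) need to be trained and exchanged, which is the source of the communication savings the abstract reports.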


Authors (6)

Riyasat Ohib
Bishal Thapaliya
Gintare Karolina Dziugaite
Jingyu Liu
Vince Calhoun
Sergey Plis

Citation Format

Ohib, R., Thapaliya, B., Dziugaite, G. K., Liu, J., Calhoun, V., & Plis, S. (2024). SSFL: Discovering Sparse Unified Subnetworks at Initialization for Efficient Federated Learning. arXiv. https://arxiv.org/abs/2405.09037

Journal Information
Publication Year
2024
Language
en
Source Database
arXiv
Access
Open Access ✓