Data-Bound Adaptive Federated Learning: FedAdaDB
Abstract
Federated Learning (FL) enables decentralized Machine Learning (ML) with a focus on preserving data privacy, but it faces a unique set of optimization challenges, such as non-IID data, communication overhead, and client drift. Adaptive optimizers like AdaGrad, Adam, and Adam variants have been applied in FL, showing good results in convergence speed and accuracy. However, combining good convergence, model generalization, and stability in an FL setup remains challenging. Data-bound adaptive methods like AdaDB have demonstrated promising results in centralized settings by incorporating dynamic, data-dependent bounds on Learning Rates (LRs). This paper introduces FedAdaDB, an FL adaptation of AdaDB that aims to address the aforementioned challenges. FedAdaDB uses the AdaDB optimizer on the server side to dynamically adjust LR bounds based on the aggregated client updates. Extensive experiments have been conducted comparing FedAdaDB with FedAvg and FedAdam on three datasets (EMNIST, CIFAR100, and Shakespeare). The results show that FedAdaDB consistently delivers better and more robust final validation accuracy across all datasets, at the cost of a small delay in convergence speed at an early stage.
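The server-side mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it treats the averaged client update as a pseudo-gradient (FedOpt-style) and applies an Adam-like step whose per-coordinate learning rates are clipped into static bounds, in the spirit of bounded adaptive methods; AdaDB's actual data-dependent bound computation is defined in the paper, and the function and parameter names here are hypothetical.

```python
import numpy as np

def fedadadb_server_step(weights, client_deltas, m, v,
                         lr=0.1, beta1=0.9, beta2=0.99, eps=1e-8,
                         lower=0.0, upper=1.0):
    """One hedged server-side update round.

    The mean of the client deltas is used as a pseudo-gradient and fed
    to an Adam-like optimizer whose per-coordinate step sizes are
    clipped into [lower, upper]. In AdaDB these bounds are dynamic and
    data-dependent; static bounds are used here only for illustration.
    """
    delta = np.mean(client_deltas, axis=0)        # aggregate client updates
    m = beta1 * m + (1 - beta1) * delta           # first-moment estimate
    v = beta2 * v + (1 - beta2) * delta ** 2      # second-moment estimate
    step = np.clip(lr / (np.sqrt(v) + eps), lower, upper)  # bounded LRs
    return weights + step * m, m, v
```

In a full training loop, the server would broadcast `weights` to the sampled clients, collect their local-training deltas, and call this step once per round, carrying `m` and `v` across rounds.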
Authors (2)
Fotios Zantalis
Grigorios Koulouras
Quick Access
- Publication Year
- 2025
- Language
- en
- Source Database
- CrossRef
- DOI
- 10.3390/iot6030035
- Access
- Open Access ✓