Semantic Scholar · Open Access · 2020 · 1890 citations

Adaptive Federated Optimization

Sashank J. Reddi, Zachary B. Charles, M. Zaheer, Zachary Garrett, Keith Rush +3 others

Abstract

Federated learning is a distributed machine learning paradigm in which a large number of clients coordinate with a central server to learn a model without sharing their own training data. Due to the heterogeneity of the client datasets, standard federated optimization methods such as Federated Averaging (FedAvg) are often difficult to tune and exhibit unfavorable convergence behavior. In non-federated settings, adaptive optimization methods have had notable success in combating such issues. In this work, we propose federated versions of adaptive optimizers, including Adagrad, Adam, and Yogi, and analyze their convergence in the presence of heterogeneous data for general nonconvex settings. Our results highlight the interplay between client heterogeneity and communication efficiency. We also perform extensive experiments on these methods and show that the use of adaptive optimizers can significantly improve the performance of federated learning.
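The paper's approach places the adaptive optimizer on the server: clients run local SGD, and the server treats the averaged model delta as a pseudo-gradient for an Adam/Adagrad/Yogi-style update. Below is a minimal sketch of one FedAdam-style round on a toy least-squares problem; the helper names (`client_update`, `fedadam_round`), the local/server learning rates, and the toy client data are illustrative assumptions, not the paper's exact algorithm or hyperparameters.

```python
import numpy as np

def client_update(x, data, local_lr=0.01, local_steps=5):
    """Local SGD on a toy least-squares client loss ||A x - b||^2."""
    A, b = data
    x = x.copy()
    for _ in range(local_steps):
        grad = 2 * A.T @ (A @ x - b)
        x -= local_lr * grad
    return x

def fedadam_round(x, m, v, client_datasets, server_lr=0.1,
                  beta1=0.9, beta2=0.99, tau=1e-3):
    """One round: average client deltas into a pseudo-gradient,
    then apply an Adam-style update on the server."""
    deltas = [client_update(x, d) - x for d in client_datasets]
    delta = np.mean(deltas, axis=0)  # points in the descent direction
    m = beta1 * m + (1 - beta1) * delta
    v = beta2 * v + (1 - beta2) * delta ** 2
    x = x + server_lr * m / (np.sqrt(v) + tau)  # tau controls adaptivity
    return x, m, v

# Toy run: three heterogeneous clients, each with its own linear system.
rng = np.random.default_rng(0)
dim = 4
clients = [(rng.standard_normal((8, dim)), rng.standard_normal(8))
           for _ in range(3)]
x, m, v = np.zeros(dim), np.zeros(dim), np.zeros(dim)
for _ in range(50):
    x, m, v = fedadam_round(x, m, v, clients)
```

Swapping the `m`/`v` update rules yields the FedAdagrad and FedYogi variants discussed in the abstract; the paper's convergence analysis hinges on how `tau` and the client heterogeneity interact.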

Authors (8)

Sashank J. Reddi

Zachary B. Charles

M. Zaheer

Zachary Garrett

Keith Rush

Jakub Konecný

Sanjiv Kumar

H. B. McMahan

Citation Format

Reddi, S.J., Charles, Z.B., Zaheer, M., Garrett, Z., Rush, K., Konecný, J. et al. (2020). Adaptive Federated Optimization. https://www.semanticscholar.org/paper/47c528344fedb6cb67a38e43d095b41c34715330

Quick Access

PDF not directly available; check the original source.
Publication Information
Year Published
2020
Language
en
Total Citations
1890×
Source Database
Semantic Scholar
Access
Open Access ✓