Semantic Scholar · Open Access · 2020 · 848 citations

SplitFed: When Federated Learning Meets Split Learning

Chandra Thapa Pathum Chamikara Mahawaga Arachchige Seyit Ahmet Camtepe

Abstract

Federated learning (FL) and split learning (SL) are two popular distributed machine learning approaches. Both follow a model-to-data scenario: clients train and test machine learning models without sharing raw data. SL provides better model privacy than FL because the machine learning model architecture is split between the clients and the server. Moreover, the split model makes SL a better option for resource-constrained environments. However, SL performs slower than FL due to the relay-based training across multiple clients. In this regard, this paper presents a novel approach, named splitfed learning (SFL), that amalgamates the two approaches, eliminating their inherent drawbacks, along with a refined architectural configuration incorporating differential privacy and PixelDP to enhance data privacy and model robustness. Our analysis and empirical results demonstrate that (pure) SFL provides test accuracy and communication efficiency similar to SL while significantly reducing the computation time per global epoch relative to SL for multiple clients. Furthermore, as in SL, its communication efficiency over FL improves with the number of clients. In addition, the performance of SFL with privacy and robustness measures is further evaluated under extended experimental settings.
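The core idea described in the abstract can be sketched in a few lines: each client holds the front portion of the model, the server holds the back portion (as in SL), clients train in parallel rather than in a relay (as in FL), and a fed server averages the client-side weights after each global round. The sketch below is a deliberately minimal toy with scalar "models" and a squared loss; all names (`server_step`, `client_step`, `splitfed_round`) are illustrative and not from the paper, and for simplicity the server here processes clients sequentially, whereas SFL proper parallelizes the server side as well.

```python
# Toy illustration of splitfed learning (SFL) with scalar linear models.
# Client-side model: h = w_c * x (front, up to the cut layer).
# Server-side model: y_hat = w_s * h (back), loss = 0.5 * (y_hat - y)^2.

def server_step(w_s, h, y, lr):
    """Server forward/backward on the smashed data h; returns the updated
    server weight and the gradient at the cut layer sent back to the client."""
    err = w_s * h - y          # d(loss)/d(y_hat)
    grad_w_s = err * h         # gradient for the server-side weight
    grad_h = err * w_s         # gradient flowing back across the cut layer
    return w_s - lr * grad_w_s, grad_h

def client_step(w_c, x, grad_h, lr):
    """Client completes backpropagation from the cut-layer gradient."""
    return w_c - lr * grad_h * x

def splitfed_round(client_weights, w_s, data, lr):
    """One global round: each client trains with the server on its local
    sample, then the fed server averages the client-side weights (FedAvg)."""
    new_ws = []
    for w_c, (x, y) in zip(client_weights, data):
        h = w_c * x                              # client forward to cut layer
        w_s, grad_h = server_step(w_s, h, y, lr) # server forward + backward
        new_ws.append(client_step(w_c, x, grad_h, lr))
    avg = sum(new_ws) / len(new_ws)              # FedAvg of client models
    return [avg] * len(new_ws), w_s

# Three clients jointly fitting y = 2 * x from their local samples.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
clients, w_s = [0.5, 0.5, 0.5], 1.0
for _ in range(50):
    clients, w_s = splitfed_round(clients, w_s, data, lr=0.05)
```

Note that only smashed data `h` and cut-layer gradients cross the client-server boundary, never raw `x` or the full model, which is the model-privacy property the abstract attributes to the split architecture.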


Authors (3)

Chandra Thapa

Pathum Chamikara Mahawaga Arachchige

Seyit Ahmet Camtepe

Citation Format

Thapa, C., Arachchige, P.C.M., Camtepe, S.A. (2020). SplitFed: When Federated Learning Meets Split Learning. https://doi.org/10.1609/aaai.v36i8.20825

Quick Access

View at Source: doi.org/10.1609/aaai.v36i8.20825

Journal Information
Publication Year: 2020
Language: en
Total Citations: 848×
Source Database: Semantic Scholar
DOI: 10.1609/aaai.v36i8.20825
Access: Open Access ✓