Edge Caching with Federated Unlearning for Low-Latency V2X Communications
Abstract
Vehicle-to-everything (V2X) communication has gained popularity as a cutting-edge technology in the Internet of Vehicles (IoV), ensuring low-latency communication for emerging transportation features. Federated learning (FL), a widely used distributed collaborative AI approach, is transforming edge caching in V2X communications due to its exceptional privacy protection. However, current FL-based edge caching methods can suffer degraded communication performance when non-independent and identically distributed (non-IID) data or invalid data, such as poisoned data, are introduced during training. In this article, we present FedFilter, an FL-based edge caching solution designed to address these challenges. FedFilter employs a personalized FL method based on model decomposition and hierarchical aggregation, caching content tailored to the diverse preferences of individual users. This raises the cache hit rate, reducing backhaul load and service latency. Moreover, FedFilter detects and mitigates the adverse effects of invalid data on the global model, ensuring the Quality of Service (QoS) of V2X communications. A case study demonstrates the effectiveness of FedFilter, showing that it not only reduces latency but also effectively removes invalid data while maintaining a high cache hit rate.
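The abstract's two key mechanisms, hierarchical aggregation and the filtering of invalid (e.g., poisoned) client updates, can be illustrated with a minimal sketch. Note this is an assumed, simplified illustration, not the paper's actual FedFilter algorithm: the outlier test (z-score of distance to the coordinate-wise median) and the two-level vehicle-to-edge-to-cloud structure are stand-ins chosen for brevity.

```python
import numpy as np

def aggregate(updates, weights=None):
    """FedAvg-style weighted average of client model updates (flat vectors)."""
    weights = np.ones(len(updates)) if weights is None else np.asarray(weights, float)
    weights = weights / weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

def filter_invalid(updates, z_thresh=2.0):
    """Drop updates whose distance from the coordinate-wise median is
    anomalously large -- a simple proxy for detecting poisoned data."""
    median = np.median(np.stack(updates), axis=0)
    dists = np.array([np.linalg.norm(u - median) for u in updates])
    mu, sigma = dists.mean(), dists.std() + 1e-12
    return [u for u, d in zip(updates, dists) if (d - mu) / sigma <= z_thresh]

def hierarchical_round(edge_groups):
    """Two-level aggregation: each edge node filters and averages the
    updates from its vehicles, then the cloud averages the edge models."""
    edge_models = [aggregate(filter_invalid(group)) for group in edge_groups]
    return aggregate(edge_models)
```

In this toy setting, a single grossly poisoned vehicle update is discarded at its edge node before it can contaminate the global model, while benign non-IID updates from all groups still contribute.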
Topics & Keywords
Authors (8)
Pengfei Wang
Zhaohong Yan
Mohammad S. Obaidat
Zhiwei Yuan
Leyou Yang
Junxiang Zhang
Zongzheng Wei
Qiang Zhang
Quick Access
- Publication Year: 2024
- Language: en
- Total Citations: 31×
- Source Database: Semantic Scholar
- DOI: 10.1109/MCOM.001.2300272
- Access: Open Access ✓