DOAJ Open Access 2025

Dynamic differential privacy technique for deep learning models

Emad Elabd

Abstract

Deep learning is a field within artificial intelligence that uses large datasets to train models capable of recognizing patterns and making predictions. One of the serious challenges facing model creators during training is preserving the privacy of the data. Adversaries use membership inference attacks to expose the privacy of the data used in training the model: they identify whether a specific data point was used in the training process or not. To protect models against this type of attack, the differential privacy approach can be used. Differential privacy involves adding noise to the training weights during the training phase according to a specific probability distribution. In standard methods, a fixed amount of noise is consistently added to the training weights at every step of the training process. This paper utilizes a modified version of the differential privacy technique to defend against membership inference attacks. The new algorithm does not add noise to the training weights at every step; instead, it adds noise randomly during the training process. Adding noise randomly increases the randomness of the training phase and decreases the adversary's chances of successful prediction in the case of an attack. The model performance is evaluated using a set of metrics, including accuracy, precision, recall, F1 score, and the privacy budget ($\epsilon$). The results demonstrate that the Gaussian Randomized Noise Differentially Private Stochastic Gradient Descent (Gaussian RanN-DP-SGD) approach consistently outperforms other standard Differential Privacy (DP) methods across accuracy, precision, recall, and F1 score. Regarding privacy preservation, the Gaussian RanN-DP-SGD method achieves the most favorable privacy-utility trade-off, maintaining a satisfactory balance between model utility and user privacy. Notably, it delivers acceptable performance within a privacy budget range of $\epsilon = 1$ to $2$, which is suitable for most practical applications.
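The core idea described above, clipping per-example gradients and then injecting Gaussian noise only on randomly selected steps rather than on every step, can be sketched as follows. This is a minimal illustration of the concept, not the paper's implementation; the function name `rann_dp_sgd`, the logistic-regression setting, and parameters such as `noise_prob` are assumptions chosen for the example.

```python
import numpy as np

def rann_dp_sgd(X, y, epochs=20, lr=0.1, clip=1.0, sigma=0.5,
                noise_prob=0.5, seed=None):
    """Sketch of randomized-noise DP-SGD for logistic regression.

    Unlike standard DP-SGD, which adds Gaussian noise at every update,
    noise is added only with probability `noise_prob` at each step
    (a hypothetical illustration of the Gaussian RanN-DP-SGD idea).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # per-example logistic-loss gradient
            z = X[i] @ w
            g = (1.0 / (1.0 + np.exp(-z)) - y[i]) * X[i]
            # clip the gradient norm to bound per-example sensitivity
            g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))
            # inject Gaussian noise only on randomly chosen steps
            if rng.random() < noise_prob:
                g = g + rng.normal(0.0, sigma * clip, size=d)
            w -= lr * g
    return w

# Hypothetical usage on a small synthetic dataset
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(float)
w = rann_dp_sgd(X, y, seed=0)
```

Because the noise schedule is itself random, an attacker observing model updates cannot predict which steps were perturbed, which is the source of the extra protection the abstract claims.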

Topics & Keywords

Author (1)

Emad Elabd

Citation Format

Elabd, E. (2025). Dynamic differential privacy technique for deep learning models. https://doi.org/10.1038/s41598-025-27708-0

Journal Information
Publication Year
2025
Source Database
DOAJ
DOI
10.1038/s41598-025-27708-0
Access
Open Access ✓