Trustworthy AI for medical decisions: Adversarially robust and fair machine learning prediction for Parkinson's disease.
Abstract
Parkinson's disease (PD) is a neurodegenerative disorder characterized by motor and non-motor symptoms, including tremor, rigidity, and postural instability. Machine learning (ML) models have shown promise for the diagnosis of PD; however, many existing approaches do not explicitly address fairness and robustness. As a result, these models can produce biased outcomes across demographic groups and remain vulnerable to adversarial attacks. In this study, we used the Parkinson's Progression Markers Initiative (PPMI) cohort, which includes clinical and demographic information from 1,084 participants spanning diverse age, sex, and racial groups. Our study addresses the key challenge of developing robust and equitable ML models to predict the progression of PD. We evaluated the performance of two fairness-optimized classifiers, Random Forest (RF) and Decision Tree (DT). To assess model vulnerability, we applied adversarial techniques, specifically label leakage and data poisoning attacks, which simulate intentional or erroneous data alterations that can amplify bias and degrade accuracy. These adversarial manipulations substantially degraded model performance: DT accuracy declined by more than 10% between sensitive groups, and RF accuracy decreased by 20%. Moreover, under attack, both fairness metrics worsened: Statistical Parity Difference (SPD), which measures the difference in the probability of receiving a positive prediction across demographic groups, and Equal Opportunity Difference (EOD), which measures the difference in true positive rates between groups. This pattern suggests that adversarial perturbations increased bias and widened performance disparities across demographic groups. Our results demonstrate that adversarial attacks increased the incidence of false positives and false negatives, thereby lowering both the accuracy and the fairness of PD diagnostic predictions.
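The two fairness metrics named in the abstract can be computed directly from model predictions and a binary sensitive attribute. The following is a minimal sketch with toy data of our own (the function names and example values are illustrative, not taken from the study):

```python
def statistical_parity_difference(y_pred, group):
    """SPD: difference in positive-prediction rates between the two groups."""
    rates = []
    for g in (0, 1):
        preds = [p for p, s in zip(y_pred, group) if s == g]
        rates.append(sum(preds) / len(preds))
    return rates[0] - rates[1]

def equal_opportunity_difference(y_true, y_pred, group):
    """EOD: difference in true-positive rates between the two groups."""
    tprs = []
    for g in (0, 1):
        hits = [p for p, t, s in zip(y_pred, y_true, group) if s == g and t == 1]
        tprs.append(sum(hits) / len(hits))
    return tprs[0] - tprs[1]

# Toy example: group 0 receives positive predictions more often than group 1.
group  = [0, 0, 0, 0, 1, 1, 1, 1]
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
print(statistical_parity_difference(y_pred, group))         # 0.25
print(equal_opportunity_difference(y_true, y_pred, group))  # 0.5
```

A value of 0 for either metric means parity between groups; the further from 0, the larger the disparity, which is why a growing SPD/EOD under attack indicates amplified bias.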
These findings underscore the urgent need for robust and fairness-aware defenses in medical AI to mitigate racial, age, and gender disparities and ensure a reliable clinical decision-making process.
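The paper does not publish its attack code; as a toy illustration under simplified assumptions (a one-dimensional nearest-centroid classifier standing in for RF/DT), a one-sided label-flipping poisoning attack of the kind described above might look like:

```python
def class_means(X, y):
    """Fit a 1-D nearest-centroid classifier: per-class mean of the feature."""
    means = {}
    for c in set(y):
        vals = [x for x, lbl in zip(X, y) if lbl == c]
        means[c] = sum(vals) / len(vals)
    return means

def predict(means, x):
    """Assign x to the class whose mean is closest."""
    return min(means, key=lambda c: abs(means[c] - x))

def accuracy(means, X, y):
    return sum(predict(means, x) == t for x, t in zip(X, y)) / len(y)

def flip_positives(y, n_flip):
    """One-sided data poisoning: relabel the first n_flip positive training
    examples as negative, biasing the model toward false negatives."""
    y = list(y)
    for i, lbl in enumerate(y):
        if n_flip == 0:
            break
        if lbl == 1:
            y[i] = 0
            n_flip -= 1
    return y

X = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]  # single synthetic feature
y = [0,   0,   0,   1,   1,   1]    # true diagnosis labels

clean_acc    = accuracy(class_means(X, y), X, y)
poisoned_acc = accuracy(class_means(X, flip_positives(y, 2)), X, y)
print(clean_acc, poisoned_acc)  # poisoned accuracy falls below clean accuracy
```

Poisoning shifts the learned class means, so previously correct positive cases are misclassified as negative, mirroring the rise in false negatives reported in the study.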
Authors (5)
Junaid Muhammad
Mitra Ghergherehchi
Shiraz Ali
Ho Seung Song
Nasir Rahim
- Year published: 2026
- Source database: DOAJ
- DOI: 10.1371/journal.pone.0342062
- Access: Open Access ✓