Exploring the Ethical Acceptability of Explainability in Medical Artificial Intelligence
Abstract
While medical artificial intelligence (AI) enhances the efficiency and accuracy of diagnosis and treatment, it also introduces the "black-box" problem of models whose outputs are difficult to interpret. The explainability of medical AI, or "explainable medical AI", has become a focal topic in academia. Explainability is an ethical requirement for the responsible application of artificial intelligence. On the premise of respecting patient interests and autonomy, multiple stakeholders should collaboratively ensure that "black-box" AI models benefit medical practice within controllable boundaries. To this end, the authors propose the concept of "ethical acceptability", review academic debates on the explainability of medical AI, analyze the core components of ethical acceptability, and construct a dynamic model to identify acceptance challenges across different contexts. The authors further propose baseline principles of a minimum explanation obligation, parity of risk and responsibility, and a negotiated co-construction mechanism. These principles aim to support the development of a context-based hybrid interpretability framework.
Authors (2)
Xuemei LYU
Rui DENG
- Year of Publication
- 2025
- Database Source
- DOAJ
- DOI
- 10.12014/j.issn.1002-0772.2025.13.01
- Access
- Open Access ✓