DOAJ Open Access 2026

Explainable AI: enhancing decision-making in the detection of cyber threats

P. W. C. Prasad, Md Shohel Sayeed, Duc-Man Nguyen, Daniel Patricko Hutabarat, Golam Md Mohiuddin

Abstract

The rapid growth of the Internet and the increasing reliance on digital systems have significantly expanded the global digital footprint, creating new challenges for cybersecurity. Artificial Intelligence (AI) technologies, particularly Machine Learning (ML) and Deep Learning (DL), have become central to addressing these challenges by enabling the automation of complex and data-intensive tasks across antivirus solutions, intrusion prevention systems, threat intelligence platforms, and email security tools. While these technologies provide high levels of accuracy in detecting anomalies, malware, and other forms of malicious activity, they are often criticized for operating as “black-box” systems. The lack of interpretability in their decision-making processes limits the ability of cybersecurity professionals to fully understand, validate, and trust the outcomes of AI-driven models, thereby restricting their practical adoption in high-stakes environments. To mitigate these limitations, Explainable Artificial Intelligence (XAI) has emerged as a promising paradigm that aims to make AI outputs transparent, interpretable, and actionable. By providing human-understandable explanations of automated decisions, XAI can bridge the gap between technical performance and practitioner usability, enabling analysts to make informed decisions, improve incident response, and strengthen organizational resilience against both known and emerging threats. This paper reviews recent state-of-the-art developments in XAI for cybersecurity, with a particular emphasis on anomaly detection, a critical area for identifying insider threats, zero-day exploits, and atypical system behavior. The review follows a structured literature analysis of peer-reviewed studies published between 2018 and 2025, identified through systematic searches in major academic databases including IEEE Xplore, Scopus, Web of Science, and ACM Digital Library. After applying predefined inclusion and exclusion criteria focused on XAI applications in cybersecurity, 53 relevant studies were analyzed to synthesize methodological trends, application domains, and evaluation practices. Drawing on these findings, the paper consolidates fragmented research contributions, identifies current gaps, and provides recommendations for advancing the design and adoption of explainable, trustworthy AI systems in cybersecurity. The analysis further highlights a critical deployment challenge: the integration of explainability mechanisms often introduces trade-offs between predictive accuracy, computational efficiency, and real-time scalability, factors that are essential in operational cybersecurity environments.
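As a concrete illustration of the kind of workflow the abstract describes (an anomaly detector whose alerts are paired with human-readable feature attributions for the analyst), the following minimal sketch combines scikit-learn's IsolationForest with the shap library on synthetic flow-like features. The feature names, data, and model choice are illustrative assumptions and are not drawn from the reviewed studies.

import numpy as np
import shap
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for network-flow features (assumed names, not from the paper).
rng = np.random.default_rng(0)
feature_names = ["bytes_sent", "duration", "failed_logins", "port_entropy"]
X = rng.normal(size=(500, 4))
X[:5] += 6  # inject a handful of obvious outliers

# Unsupervised anomaly detector: lower decision_function score = more anomalous.
model = IsolationForest(random_state=0).fit(X)
scores = model.decision_function(X)
most_anomalous = X[np.argsort(scores)[:1]]  # single worst-scoring sample

# Wrapping the scoring function yields a model-agnostic (permutation) SHAP explainer;
# a small background sample keeps the explanation cheap to compute.
explainer = shap.Explainer(model.decision_function, X[:100])
explanation = explainer(most_anomalous)

# Per-feature contributions an analyst could inspect before triaging the alert.
for name, contribution in zip(feature_names, explanation.values[0]):
    print(f"{name:>14}: {contribution:+.4f} contribution to the anomaly score")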

Authors (5)

P. W. C. Prasad

Md Shohel Sayeed

Duc-Man Nguyen

Daniel Patricko Hutabarat

Golam Md Mohiuddin

Citation Format

Prasad, P.W.C., Sayeed, M.S., Nguyen, D., Hutabarat, D.P., Mohiuddin, G.M. (2026). Explainable AI: enhancing decision-making in the detection of cyber threats. https://doi.org/10.3389/fcomp.2026.1762332

Quick Access

PDF not directly available

Check the original source →
View at source: doi.org/10.3389/fcomp.2026.1762332
Journal Information
Publication Year
2026
Source Database
DOAJ
DOI
10.3389/fcomp.2026.1762332
Access
Open Access ✓