Explainable AI for Intrusion Detection Systems: Enhancing Trust, Transparency, and Real-Time Threat Response
DOI:
https://doi.org/10.63282/3050-9416.IJAIBDCMS-V7I2P119

Keywords:
Explainable Artificial Intelligence, Intrusion Detection Systems, Cybersecurity, Machine Learning, Transparency, Trust, Real-Time Threat Detection

Abstract
The growing sophistication of cyber threats has exposed the limitations of conventional intrusion detection systems that depend on static signatures and rule-based detection. Although machine learning has improved the ability to identify malicious traffic patterns, many high-performing models remain difficult to interpret, reducing trust and limiting operational adoption. This study develops and evaluates a real-time explainable intrusion detection framework that combines predictive accuracy with transparent decision support. Using the NSL-KDD and CICIDS2017 benchmark datasets, the study implemented Random Forest and Deep Neural Network models under a stratified training, validation, and testing protocol with repeated experimental runs. Data preprocessing included normalization, feature engineering, imbalance correction, and hyperparameter optimization. Explainability was integrated through SHAP and LIME to generate both global and case-specific interpretations of model predictions. The results show that both models achieved strong classification performance, while the Deep Neural Network produced higher recall and ROC-AUC under more complex traffic conditions. Random Forest delivered lower inference latency and competitive precision. The inclusion of explainability introduced only modest processing overhead while significantly improving interpretability, alert transparency, and analyst usability. The study contributes a unified evaluation of predictive performance, explanation quality, and real-time response efficiency, supported by a deployment-oriented framework for practical security environments. The findings indicate that effective intrusion detection systems should be judged not only by accuracy, but also by how clearly and rapidly they support human decision-making. This work advances the development of trustworthy, accountable, and operationally effective cybersecurity systems.
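The explainability step summarized above can be sketched in a minimal form. The snippet below trains a Random Forest on synthetic stand-in data (the abstract's actual experiments use NSL-KDD and CICIDS2017, which are not bundled here) and produces a global feature ranking; permutation importance is used as a model-agnostic stand-in for the SHAP/LIME summaries the study employs, since it conveys the same idea of attributing a prediction-quality contribution to each traffic feature. All names and parameters are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: synthetic features stand in for NSL-KDD traffic,
# and permutation importance stands in for a SHAP global summary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic "traffic" data: 8 numeric features, binary benign/attack labels.
X, y = make_classification(n_samples=600, n_features=8,
                           n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)

# Global explanation: rank features by how much shuffling each one
# degrades held-out accuracy (model-agnostic, analogous in spirit to
# ranking features by mean absolute SHAP value).
result = permutation_importance(model, X_te, y_te,
                                n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
for i in ranking:
    print(f"feature_{i}: importance = {result.importances_mean[i]:.3f}")
```

In a deployment like the one the abstract describes, the same ranking would be recomputed per alert (e.g., with SHAP's TreeExplainer for tree ensembles) so an analyst can see which traffic features drove a specific detection rather than only the model's aggregate behavior.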