A Self-Healing Generative AI Framework for Regulated Decision Workflows: A Healthcare Claims Case Study
DOI: https://doi.org/10.63282/3050-9416.IJAIBDCMS-V5I3P117

Keywords: Generative AI, Self-Healing Systems, Regulated Decision Workflows, Healthcare Claims Adjudication, Enterprise AI Architecture, Auditability and Compliance, Policy-Aware AI Systems, Human-in-the-Loop AI, Explainable AI (XAI), Operational Resilience

Abstract
Regulated enterprise decision workflows, particularly in healthcare claims adjudication, operate under strict requirements for auditability, policy compliance, and operational reliability. Despite increasing adoption of automation and artificial intelligence, most decision pipelines remain vulnerable to data quality issues, policy interpretation errors, and manual exception handling, leading to costly downstream effects such as claim rework and appeal backlogs. Traditional rule-based systems and isolated machine learning models lack the adaptability and contextual reasoning needed to address these challenges in real time. This paper introduces a self-healing generative AI framework designed for regulated decision workflows, combining large language models with workflow state monitoring, policy constraints, and governance controls. The proposed architecture continuously observes decision execution, detects semantic and procedural anomalies, and generates corrective recommendations while preserving full audit trails and human oversight. Rather than replacing existing adjudication logic, the framework augments enterprise workflows with explainable, policy-aware reasoning that enables safe and controlled remediation. A healthcare claims adjudication use case is presented to demonstrate how the framework identifies common processing failures—such as missing documentation, rule misapplication, and classification inconsistencies—and supports faster resolution without compromising regulatory requirements. The paper discusses operational outcomes including improved error containment, reduced manual intervention, and enhanced audit readiness. While evaluated in a healthcare context, the framework is intentionally designed to generalize across other regulated domains, including insurance underwriting, financial decision pipelines, and public-sector benefit administration, highlighting its broader applicability and industry relevance.
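The abstract describes an observe-detect-recommend loop in which anomalies are logged for audit and routed to human reviewers rather than corrected automatically. The following is a minimal illustrative sketch of that control flow; all class names, check names, and field names are hypothetical and not prescribed by the paper.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Claim:
    claim_id: str
    documents: List[str]
    procedure_code: Optional[str] = None

@dataclass
class AuditEvent:
    claim_id: str
    check: str
    finding: str
    recommendation: str

class SelfHealingMonitor:
    """Observes claim execution, flags procedural and semantic anomalies,
    and queues hedged remediation suggestions for human sign-off while
    preserving a full audit trail."""

    def __init__(self) -> None:
        self.audit_log: List[AuditEvent] = []
        self.review_queue: List[AuditEvent] = []

    def check_claim(self, claim: Claim) -> List[AuditEvent]:
        findings: List[AuditEvent] = []
        # Procedural anomaly: required documentation is missing.
        if "explanation_of_benefits" not in claim.documents:
            findings.append(AuditEvent(
                claim.claim_id, "missing_documentation",
                "Explanation of benefits not attached",
                "Request the document from the provider before adjudication"))
        # Semantic anomaly: no procedure code to adjudicate against policy.
        if claim.procedure_code is None:
            findings.append(AuditEvent(
                claim.claim_id, "classification",
                "No procedure code assigned",
                "Suggest a code from the claim narrative; route to a coder"))
        # Recommendations are never auto-applied: every finding is logged
        # and queued for human review, preserving oversight.
        self.audit_log.extend(findings)
        self.review_queue.extend(findings)
        return findings

monitor = SelfHealingMonitor()
claim = Claim("CLM-001", documents=["intake_form"])
events = monitor.check_claim(claim)
print([e.check for e in events])
# → ['missing_documentation', 'classification']
```

In this sketch the "generative" step is reduced to fixed recommendation strings; in the framework described above, that slot would be filled by a large language model constrained by policy rules, with its output still subject to the same audit-log and human-review gates.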