Preventing Discriminatory Risk Assessment: A Bias Detection Framework for LLM-Powered Insurance Decision Support

Authors

  • Rama Krishna Kumar Lingamgunta, IT Principal, AI Center of Enablement, Cigna Evernorth Services Inc., Raleigh, North Carolina, USA

DOI:

https://doi.org/10.63282/3050-9416.IJAIBDCMS-V5I2P117

Keywords:

Generative AI, Large Language Models (LLMs), Bias Detection, Discriminatory Risk Assessment, Insurance Decision Support, Underwriting Assistants, Ethical AI, Fairness and Transparency, Human-in-the-Loop AI, AI Governance, Regulated Insurance Systems

Abstract

The increasing adoption of large language models (LLMs) in insurance underwriting and risk assessment has introduced new forms of algorithmic bias that are not adequately addressed by traditional fairness evaluation techniques. Unlike conventional predictive models, LLM-powered decision support systems reason over unstructured documentation, policy language, and contextual narratives, creating additional pathways for both direct and proxy-based discrimination. In regulated insurance environments, such bias poses significant ethical, legal, and regulatory risks, particularly when AI systems influence high-impact financial decisions. This paper proposes a bias detection framework for LLM-powered insurance decision support systems designed to prevent discriminatory risk assessment while preserving human oversight and auditability. The framework continuously monitors model interactions and decision context to identify bias signals arising from protected attributes, proxy indicators, documentation asymmetry, and inconsistent reasoning patterns. Bias detection is achieved through a combination of prompt instrumentation, contextual feature analysis, counterfactual evaluation, and policy-aligned constraints that operate alongside existing underwriting workflows. Rather than enabling autonomous decision-making, the framework treats LLMs as assistive reasoning components whose outputs are evaluated for fairness risk before informing human judgment. Representative underwriting use cases demonstrate how the framework surfaces biased reasoning, supports corrective intervention, and reduces downstream risk of unfair outcomes. The results indicate improved transparency, bias containment, and regulatory readiness without compromising operational efficiency. While evaluated in an insurance underwriting context, the proposed framework generalizes to other regulated decision domains where generative AI systems influence consequential human decisions.
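The abstract names counterfactual evaluation as one of the framework's bias detection mechanisms: rescore a decision narrative after swapping protected-attribute terms and flag outputs that shift materially. A minimal sketch of that idea follows; the names here (`counterfactual_flags`, `PROTECTED_SWAPS`, the `toy_score` stand-in for an LLM risk scorer) are illustrative assumptions, not the paper's actual implementation.

```python
from typing import Callable

# Hypothetical swap table for protected-attribute terms in an
# applicant narrative (illustrative, not exhaustive).
PROTECTED_SWAPS = [("male", "female"), ("married", "single")]

def counterfactual_flags(score: Callable[[str], float],
                         narrative: str,
                         tolerance: float = 0.05) -> list[tuple[str, float]]:
    """Swap each protected term present in the narrative, rescore the
    counterfactual variant, and flag swaps whose score divergence
    exceeds the tolerance."""
    base = score(narrative)
    flags = []
    for term, swap in PROTECTED_SWAPS:
        if term in narrative:
            variant = narrative.replace(term, swap)
            delta = abs(score(variant) - base)
            if delta > tolerance:
                flags.append((f"{term}->{swap}", round(delta, 3)))
    return flags

# Stand-in scorer that improperly shifts risk on one term,
# so the probe has something to detect.
def toy_score(text: str) -> float:
    return 0.7 if "female" in text else 0.5

print(counterfactual_flags(toy_score, "applicant is male, married, age 40"))
# -> [('male->female', 0.2)]
```

In the framework described by the abstract, a flag like this would not block the decision autonomously; it would surface the divergence for human review before the assistant's output informs underwriting judgment.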

Published

2024-06-30

Section

Articles

How to Cite

1. Lingamgunta RKK. Preventing Discriminatory Risk Assessment: A Bias Detection Framework for LLM-Powered Insurance Decision Support. IJAIBDCMS [Internet]. 2024 Jun. 30 [cited 2026 Mar. 15];5(2):173-9. Available from: https://ijaibdcms.org/index.php/ijaibdcms/article/view/361