Applying Interpretable AI Techniques to Capital and Regulatory Reporting for Supervisory Transparency

Authors

  • Laxmi Naga Durga Pandrapragada, Independent Researcher, California, United States

DOI:

https://doi.org/10.63282/3050-9416.ICAIDSCT26-136

Keywords:

Audit defensibility, capital reporting, explainable AI, interpretable AI, model governance, regulatory reporting, supervisory transparency

Abstract

Capital and regulatory reporting in large banking institutions operates under supervisory expectations that demand that reporting outcomes be accurate, transparent, traceable, and defensible under audit and examination. Although artificial intelligence (AI) can improve efficiency and analytic depth, black-box models create supervisory risk when they cannot be clearly explained, reproduced, and governed. This paper proposes a governance-aligned interpretable AI framework tailored to capital and regulatory reporting. The framework integrates controlled data preparation, policy-aligned regulatory logic, interpretable analytical techniques, and end-to-end governance artifacts across the reporting lifecycle. The approach enables responsible adoption of AI by preserving auditability, supporting independent validation, and aligning analytics with supervisory intent.

References

1. Basel Committee on Banking Supervision, “Principles for effective risk data aggregation and risk reporting (BCBS 239),” Bank for International Settlements, 2013. [Link]

2. Federal Reserve Board, “Supervisory Guidance on Model Risk Management (SR 11-7),” 2011. [Link]

3. C. Rudin, “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead,” Nature Machine Intelligence, 2019. [Link]

4. F. Doshi-Velez and B. Kim, “Towards a rigorous science of interpretable machine learning,” arXiv preprint, 2017. [Link]

5. S. Lundberg and S.-I. Lee, “A unified approach to interpreting model predictions,” Advances in Neural Information Processing Systems, 2017. [Link]

6. European Banking Authority, “Guidelines on ICT and security risk management,” 2019. [Link]

Published

2026-02-17

How to Cite

Durga Pandrapragada LN. Applying Interpretable AI Techniques to Capital and Regulatory Reporting for Supervisory Transparency. IJAIBDCMS [Internet]. 2026 Feb. 17 [cited 2026 Mar. 12];:309-11. Available from: https://ijaibdcms.org/index.php/ijaibdcms/article/view/426