Applying Interpretable AI Techniques to Capital and Regulatory Reporting for Supervisory Transparency
DOI:
https://doi.org/10.63282/3050-9416.ICAIDSCT26-136

Keywords:
Audit defensibility, capital reporting, explainable AI, interpretable AI, model governance, regulatory reporting, supervisory transparency

Abstract
Capital and regulatory reporting in large banking institutions operates under supervisory expectations that demand that reporting outcomes be accurate, transparent, traceable, and defensible under audit and examination. Although artificial intelligence (AI) can improve efficiency and analytic depth, black-box models create supervisory risk when they cannot be clearly explained, reproduced, and governed. This paper proposes a governance-aligned interpretable AI framework tailored to capital and regulatory reporting. The framework integrates controlled data preparation, policy-aligned regulatory logic, interpretable analytical techniques, and end-to-end governance artifacts across the reporting lifecycle. The approach enables responsible adoption of AI by preserving auditability, supporting independent validation, and aligning analytics with supervisory intent.
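To make the notion of an auditable, reproducible governance artifact concrete, the following is a minimal sketch in Python. It is illustrative only: the class names, the `model_id` label, and the choice of a SHA-256 lineage hash are assumptions, not part of the proposed framework. The sketch computes a standard CET1 capital ratio (CET1 capital divided by risk-weighted assets) and packages inputs, output, and a deterministic lineage digest so that an examiner could re-derive the reported figure from the same inputs.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class CapitalInputs:
    """Inputs to a single capital-ratio calculation (illustrative)."""
    cet1_capital: float  # Common Equity Tier 1 capital
    rwa: float           # risk-weighted assets

def cet1_ratio(inp: CapitalInputs) -> float:
    """Standard CET1 ratio: CET1 capital / risk-weighted assets."""
    return inp.cet1_capital / inp.rwa

def audit_record(inp: CapitalInputs, model_id: str) -> dict:
    """Emit a reproducible reporting artifact: inputs, output, and a
    lineage hash over the canonicalized payload, so the same inputs
    always yield the same digest for independent validation."""
    payload = {
        "model_id": model_id,
        "inputs": asdict(inp),
        "output": {"cet1_ratio": cet1_ratio(inp)},
    }
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return {**payload, "lineage_sha256": digest}

rec = audit_record(CapitalInputs(cet1_capital=120.0, rwa=1000.0), "cap-rpt-v1")
print(rec["output"]["cet1_ratio"])  # 0.12
```

Because the payload is serialized with sorted keys before hashing, re-running the calculation on identical inputs reproduces the same digest, which is the property an independent validator or supervisor would check.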