A Strategic Approach to Enterprise-Wide Cybersecurity Quantification via Standardized Questionnaires and Risk Modeling in Global Financial Sectors
DOI: https://doi.org/10.63282/3050-9416.IJAIBDCMS-V3I1P110

Keywords: Enterprise-wide cybersecurity, quantitative risk analysis, standardized security questionnaires, Bayesian networks, generalized linear models (GLM), Monte Carlo simulation, FAIR framework, control ontology, OWASP SAMM, CVSS, ISO/IEC 27001, NIST Cybersecurity Framework, PCI DSS, MITRE ATT&CK, financial services security, risk modeling, expected annual loss (EAL), value-at-risk (VaR), security capital efficiency (SCE), software supply chain risk, SBOM, third-party risk management, operational resilience

Abstract
Banks and credit unions increasingly deliver value through software-intensive products such as mobile apps, open banking APIs, payments engines, analytics platforms, and partner-embedded services, yet they lack a defensible, enterprise-wide way to compare security posture, prioritize scarce remediation budgets, and express residual risk in economic language that boards of directors and regulators can act on. In this paper, we propose a strategic framework for enterprise-level cybersecurity quantification that combines structured questionnaires with risk modelling to produce decision-grade, comparable metrics across global portfolios. We develop a canonical control ontology spanning governance, identity, data protection, application security, vulnerability management, cloud/container hardening, secrets and key management, monitoring and response, third-party and open-source risk, and resilience. Heterogeneous question sets (SIG, CAIQ, internal SDL forms) are unified into canonical items mapped to established frameworks (NIST CSF, ISO/IEC 27001/27005, PCI DSS, OWASP SAMM). Each item captures control strength, scope, and evidence quality (policy, manual proof, automated attestation, independent validation), establishing a normalized, machine-readable evidence layer. On this foundation, we introduce a two-tier modelling stack. Tier 1 encodes how control states influence latent exposure across four attack stages (initial compromise, privilege escalation, data exfiltration, and service disruption), conditioned on product context (internet exposure, user base, and regulatory sensitivity).
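As an illustrative sketch only (not the paper's calibrated parameterization), the evidence-normalization and Tier-1 steps can be approximated with a noisy-OR node, a standard Bayesian-network construction: each control independently blocks an attack stage with probability equal to its effective strength, where strength is the stated questionnaire score discounted by evidence quality. The weights, scores, and base rate below are assumed values.

```python
# Evidence-quality discount factors for the four evidence tiers named in the
# abstract; the numeric weights are illustrative assumptions.
EVIDENCE_WEIGHT = {
    "policy": 0.4,
    "manual_proof": 0.6,
    "automated_attestation": 0.85,
    "independent_validation": 1.0,
}

def control_strength(raw_score: float, evidence: str) -> float:
    """Effective strength in [0, 1]: stated score discounted by evidence quality."""
    return raw_score * EVIDENCE_WEIGHT[evidence]

def noisy_or_exposure(base_rate: float, controls: list[float]) -> float:
    """P(attack stage succeeds) when each control independently blocks the
    attack with probability equal to its effective strength."""
    p = base_rate
    for strength in controls:
        p *= 1.0 - strength
    return p

# Hypothetical example: initial-compromise stage of an internet-exposed product.
controls = [
    control_strength(0.9, "automated_attestation"),  # e.g. MFA coverage
    control_strength(0.7, "manual_proof"),           # e.g. WAF rule review
]
p_compromise = noisy_or_exposure(base_rate=0.6, controls=controls)
```

The noisy-OR form keeps the per-stage parameter count linear in the number of controls, which is what makes a canonical control ontology tractable across a large product portfolio.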
Tier 2 maps exposure into loss distributions using regularized statistical techniques: incident frequency via generalized linear models and severity via mixture models (lognormal/Pareto/gamma), integrated by Monte Carlo simulation to produce Expected Annual Loss (EAL) and Value-at-Risk (VaR). We also calculate marginal risk reduction and a Security Capital Efficiency (SCE) metric that measures expected loss decrease per unit of spend, enabling budget optimization across a portfolio and "next-best control" suggestions. The methodology builds in model-risk governance: documentation, calibration, challenger models, backtesting, and audit trails from outputs back to evidence artifacts, supporting transparency and regulatory engagement. On a representative multi-product dataset, the approach delivers stable rank-ordering of product risk, improved calibration relative to qualitative heat-map baselines, and materially higher expected loss savings through SCE-guided reallocation under fixed budgets. Beyond the quantitative gains, standardized questionnaires reduce assessment friction, raise evidence quality (favoring automated attestations), and align remediation with measurable business outcomes. The result is a repeatable framework that shifts financial services cybersecurity from maturity narratives to defensible, comparable, and financially grounded risk numbers, enabling board oversight, supervisory engagement, and scalable security investment across the enterprise.
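A minimal sketch of the Tier-2 simulation, using only the Python standard library: Poisson incident frequency compounded with a lognormal/Pareto mixture severity, aggregated by Monte Carlo into EAL, VaR, and an SCE estimate. The frequency rates, mixture parameters, and remediation spend are illustrative assumptions, not calibrated values from the paper.

```python
import math
import random
import statistics

random.seed(7)

def sample_poisson(lam: float) -> int:
    """Knuth's inverse-transform sampler, adequate for small frequency rates."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def simulate_annual_losses(freq_rate, n_sims=20_000, p_tail=0.05,
                           mu=11.0, sigma=1.2, alpha=1.8, xm=500_000.0):
    """Compound loss model: Poisson frequency x lognormal/Pareto mixture severity."""
    losses = []
    for _ in range(n_sims):
        total = 0.0
        for _ in range(sample_poisson(freq_rate)):
            if random.random() < p_tail:
                # Pareto tail via inverse transform; 1-U avoids division by zero
                total += xm / ((1.0 - random.random()) ** (1.0 / alpha))
            else:
                total += random.lognormvariate(mu, sigma)  # lognormal body
        losses.append(total)
    return losses

annual = simulate_annual_losses(freq_rate=0.8)
eal = statistics.fmean(annual)                   # Expected Annual Loss
var95 = sorted(annual)[int(0.95 * len(annual))]  # 95% Value-at-Risk

# Security Capital Efficiency: expected loss reduction per unit of spend,
# comparing the baseline to a hypothetical post-remediation frequency rate
# achieved for an assumed spend of 250,000.
annual_post = simulate_annual_losses(freq_rate=0.5)
sce = (eal - statistics.fmean(annual_post)) / 250_000.0
```

Ranking candidate controls by SCE is what drives the "next-best control" suggestions: under a fixed budget, spend is allocated to the controls with the largest simulated EAL reduction per unit cost.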
References
1. ISO/IEC 27001:2013, Information technology – Security techniques – Information security management systems – Requirements, International Organization for Standardization, 2013 (with Cor. amendments 2014/2015).
2. ISO/IEC 27005:2018, Information technology – Security techniques – Information security risk management, International Organization for Standardization, 2018.
3. NIST, Framework for Improving Critical Infrastructure Cybersecurity, Version 1.1, National Institute of Standards and Technology, Apr. 2018.
4. NIST, Security and Privacy Controls for Information Systems and Organizations, NIST Special Publication 800-53, Rev. 5, Sept. 2020.
5. NIST, Guide for Conducting Risk Assessments, NIST Special Publication 800-30, Rev. 1, Sept. 2012.
6. OWASP, Software Assurance Maturity Model (SAMM) v2.0, Open Web Application Security Project, 2020.
7. OWASP, OWASP Top 10 – 2021: The Ten Most Critical Web Application Security Risks, Open Web Application Security Project, 2021.
8. FIRST, Common Vulnerability Scoring System v3.1: Specification Document, Forum of Incident Response and Security Teams, 2019.
9. MITRE, Common Weakness Enumeration (CWE) Overview, The MITRE Corporation, revisions through 2021.
10. MITRE, ATT&CK® Knowledge Base, The MITRE Corporation, versions through 2021.
11. PCI Security Standards Council, Payment Card Industry Data Security Standard, v3.2.1, May 2018.
12. FFIEC, Cybersecurity Assessment Tool, Federal Financial Institutions Examination Council, updates through 2020.
13. European Banking Authority, Guidelines on ICT and Security Risk Management, Nov. 2019.
14. Monetary Authority of Singapore, Technology Risk Management Guidelines, Jan. 2021.
15. European Union, General Data Protection Regulation, Regulation (EU) 2016/679, adopted 2016; enforcement guidance through 2021.
16. CPMI-IOSCO, Guidance on cyber resilience for financial market infrastructures, June 2016; FAQs and supervisory commentary through 2021.
17. J. Jones, An Introduction to Factor Analysis of Information Risk (FAIR), Risk Management Insight, 2012; FAIR Institute collateral through 2021.
18. T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed., Springer, 2009; applications to cyber risk through 2021.
19. F. Jensen and T. Nielsen, Bayesian Networks and Decision Graphs, 2nd ed., Springer, 2007; applied to security risk contexts through 2021.
20. Basel Committee on Banking Supervision, Sound Practices for the Management and Supervision of Operational Risk, Bank for International Settlements, 2011.
21. Shared Assessments, Standardized Information Gathering (SIG) Questionnaire, 2021 edition.
22. Cloud Security Alliance, Consensus Assessments Initiative Questionnaire (CAIQ), versions v3/v4, 2021.
23. Vendor Security Alliance (VSA), VSA Questionnaire, 2018–2021 releases.
24. U.S. NTIA, The Minimum Elements for a Software Bill of Materials (SBOM), July 2021.
25. SLSA (Supply-chain Levels for Software Artifacts), Provenance and Levels Documentation, 2021.
26. Board of Governors of the Federal Reserve System & OCC, Supervisory Guidance on Model Risk Management (SR 11-7), Apr. 2011; industry application to cyber risk through 2021.
27. ISO/IEC 27034, Information technology – Security techniques – Application security, multi-part standard, 2011–2018.
28. NIST, Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations, NIST SP 800-171, Rev. 2, Feb. 2020.
29. ENISA, Threat Landscape 2021, European Union Agency for Cybersecurity, Oct. 2021.
30. COSO, Enterprise Risk Management – Integrating with Strategy and Performance, Committee of Sponsoring Organizations, 2017.
31. U.S. SEC, Commission Statement and Guidance on Public Company Cybersecurity Disclosures, Feb. 2018.
32. NIST, Key Practices in Cyber Supply Chain Risk Management: Observations from Industry, NIST IR 8276, Feb. 2021.
33. ISO/IEC 27017:2015, Information technology – Security techniques – Code of practice for information security controls based on ISO/IEC 27002 for cloud services, 2015.
34. ISO/IEC 27018:2019, Information technology – Security techniques – Code of practice for protection of personally identifiable information (PII) in public clouds, 2019.
35. U.S. CISA, Known Exploited Vulnerabilities (KEV) Catalog, first published 2021.
36. OWASP, Application Security Verification Standard (ASVS) v4.0.3, 2020.
37. NIST, Computer Security Incident Handling Guide, NIST SP 800-61, Rev. 2, Aug. 2012; updates through 2021.
38. NIST, Application Container Security Guide, NIST SP 800-190, Sept. 2017; adoption through 2021.
39. ISO 22301:2019, Security and resilience – Business continuity management systems – Requirements, International Organization for Standardization, 2019.
40. NIST, Technical Guide to Information Security Testing and Assessment, NIST SP 800-115, Sept. 2008; cited in practice through 2021.
41. OWASP, Dependency-Track and Component Analysis Practices, community documentation, 2021.
42. M. Kommineni and R. Parvathi, Risk Analysis for Exploring the Opportunities in Cloud Outsourcing, 2013.