Policy-Aware Secure Data Governance in Distributed Information Systems Using Explainable AI Models

Authors

  • Srinivas Potluri, Director, EGS Global Services

DOI:

https://doi.org/10.63282/3050-9416.IJAIBDCMS-V6I3P101

Keywords:

Distributed Information Systems, Explainable AI (XAI), Federated Learning, Blockchain, Data Integrity

Abstract

In the modern digital age, distributed information systems have become critical infrastructure for storing, sharing, and processing data. Because these systems span multiple administrative domains, they pose considerable challenges for secure data governance that complies with both organizational and regulatory policies, and their decentralized nature further complicates data privacy, security, trust, and compliance. To address these challenges, we introduce a multifaceted framework for policy-aware secure data governance that leverages Explainable Artificial Intelligence (XAI) models. Our approach combines dynamic policy enforcement with security rules and XAI techniques to improve the transparency, accountability, and explainability of security decisions. The proposed system enables organizations to govern data securely in distributed environments and gives human stakeholders understandable explanations of policy violations and access-control decisions. The architecture comprises layers for policy definition, policy enforcement, data auditing, and XAI-based decision interpretation. We develop and deploy an explainable decision engine, built on SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations), that shows why access to particular data is granted or denied. In addition, our methodology uses federated learning and blockchain to preserve the integrity and provenance of data across decentralized nodes. An extensive literature review identifies the existing gaps in secure data governance and explainable AI. A detailed experimental setup and a set of case studies demonstrate the effectiveness of our method in improving policy compliance, reducing unauthorized access attempts, and building stakeholder trust. Our findings show that the high interpretability of the framework makes the governance process more rigorous and trustworthy in detecting policy violations.
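
The decision engine described above is built on SHAP and LIME; as an illustration of that idea only, the following is a minimal, self-contained sketch (not the authors' implementation) of how a LIME explanation can justify a single access-control decision. The feature set (role_level, resource_sensitivity, off_hours_score, prior_violations), the synthetic request log, the toy policy, and the random-forest stand-in model are all hypothetical assumptions introduced for this sketch.

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

FEATURES = ["role_level", "resource_sensitivity", "off_hours_score", "prior_violations"]

# Hypothetical synthetic access-request log (0-4 ordinal score per feature).
rng = np.random.default_rng(42)
X = rng.integers(0, 5, size=(500, 4)).astype(float)

# Toy policy label: deny (1) when resource sensitivity exceeds the requester's
# role level, or when an off-hours request comes from someone with prior violations.
y = ((X[:, 1] > X[:, 0]) | ((X[:, 2] >= 3) & (X[:, 3] >= 2))).astype(int)

# Stand-in access-decision model; the paper's production engine is not published here.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=FEATURES, class_names=["grant", "deny"], mode="classification"
)

# One incoming request: a low-privilege role asking for highly sensitive data.
request = np.array([1.0, 4.0, 0.0, 0.0])
decision = "deny" if model.predict(request.reshape(1, -1))[0] == 1 else "grant"
exp = explainer.explain_instance(request, model.predict_proba, num_features=4)

print(f"decision: {decision}")
for feature_rule, weight in exp.as_list():  # signed contribution toward "deny"
    print(f"  {feature_rule}: {weight:+.3f}")
```

In such a setup, the signed per-feature weights give the human-readable rationale the abstract refers to, e.g. that the denial is driven mainly by resource sensitivity exceeding the requester's role level; a SHAP explainer could be substituted for LIME in the same role.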

Published

2025-07-03

Issue

Vol. 6 No. 3 (2025)

Section

Articles

How to Cite

Potluri S. Policy-Aware Secure Data Governance in Distributed Information Systems Using Explainable AI Models. IJAIBDCMS [Internet]. 2025 Jul. 3 [cited 2025 Oct. 25];6(3):1-10. Available from: https://ijaibdcms.org/index.php/ijaibdcms/article/view/194