{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,2]],"date-time":"2026-03-02T17:13:53Z","timestamp":1772471633300,"version":"3.50.1"},"reference-count":42,"publisher":"MDPI AG","issue":"2","license":[{"start":{"date-parts":[[2026,3,2]],"date-time":"2026-03-02T00:00:00Z","timestamp":1772409600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["JCP"],"abstract":"<jats:p>High-risk Artificial Intelligence (AI) systems deployed in cybersecurity and privacy-critical contexts must satisfy not only demanding performance targets but also stringent obligations for transparency, accountability, and human oversight under the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act). Existing approaches often treat these concerns in isolation as follows: Explainable Artificial Intelligence (XAI) methods are added ad hoc to machine learning pipelines, while governance and regulatory frameworks remain largely conceptual and weakly connected to the concrete artefacts produced in practice. This article proposes XAI-Compliance-by-Design, a modular framework that integrates XAI techniques, compliance-by-design principles and trustworthy Machine Learning Operations (MLOps) practices into a unified architecture for high-risk AI systems in cybersecurity and privacy domains. The framework follows a dual-flow design that couples an upstream technical pipeline (data, model, explanation, and monitoring) with a downstream governance pipeline (policy, oversight, audit, and decision-making), orchestrated by a Compliance-by-Design Engine and a technical\u2013regulatory correspondence matrix aligned with the GDPR, the AI Act, and ISO\/IEC 42001. 
The framework is instantiated and evaluated through an end-to-end, Python-based proof of concept using a synthetic, intrusion detection system (IDS)-inspired anomaly detection scenario with a Random Forest (RF) classifier, Shapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), drift indicators, and tamper-evident evidence bundles and decision dossiers. The results show that, even in a modest, toy setting, the framework systematically produces verifiable artefacts that support auditability and accountability across the model lifecycle. By linking explanation reports, drift statistics and compliance logs to concrete regulatory provisions, the approach illustrates how organisations operating high-risk AI for cybersecurity and privacy can move from model-centric optimisation to evidence-centric governance. The article discusses how the proposed framework can be generalised to real-world high-risk AI applications, contributing to the operationalisation of European digital sovereignty in AI governance. 
This article does not introduce a new intrusion detection algorithm; instead, it proposes an evidence-centric governance pipeline that captures decision provenance and compliance artefacts so that decisions can be audited and justified against regulatory obligations.<\/jats:p>","DOI":"10.3390\/jcp6020043","type":"journal-article","created":{"date-parts":[[2026,3,2]],"date-time":"2026-03-02T16:06:59Z","timestamp":1772467619000},"page":"43","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["XAI-Compliance-by-Design: A Modular Framework for GDPR- and AI Act-Aligned Decision Transparency in High-Risk AI Systems"],"prefix":"10.3390","volume":"6","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6091-2624","authenticated-orcid":false,"given":"Antonio","family":"Goncalves","sequence":"first","affiliation":[{"name":"Naval Research Center (CINAV), Portuguese Naval Academy, Military University Institute, Lisbon Naval Base, 2810-001 Almada, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7248-4310","authenticated-orcid":false,"given":"Anacleto","family":"Correia","sequence":"additional","affiliation":[{"name":"Naval Research Center (CINAV), Portuguese Naval Academy, Military University Institute, Lisbon Naval Base, 2810-001 Almada, Portugal"}]}],"member":"1968","published-online":{"date-parts":[[2026,3,2]]},"reference":[{"key":"#cr-split#-ref_1.1","unstructured":"European Union (2016). Regulation"},{"key":"#cr-split#-ref_1.2","unstructured":"(EU) 2016\/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data (General Data Protection Regulation). Off. J. Eur. Union, L 119, 1-88."},{"key":"ref_2","unstructured":"European Parliament and Council of the European Union (2025, November 24). 
Regulation (EU) 2024\/1689 of the European Parliament and of the Council of 13 June 2024 Laying down Harmonised Rules on Artificial Intelligence and Amending Certain Union Legislative Acts (Artificial Intelligence Act). Off. J. Eur. Union 2024, OJ L, 2024\/1689, 12.7.2024. Available online: http:\/\/data.europa.eu\/eli\/reg\/2024\/1689\/oj."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"82","DOI":"10.1016\/j.inffus.2019.12.012","article-title":"Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges Toward Responsible AI","volume":"58","author":"Bennetot","year":"2020","journal-title":"Inf. Fusion"},{"key":"ref_4","unstructured":"Doshi-Velez, F., and Kim, B. (2017). Towards a Rigorous Science of Interpretable Machine Learning. arXiv."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Ahangar, M.N., Jalali, S., and Dastjerdi, A. (2025). AI Trustworthiness in Manufacturing. Sensors, 25.","DOI":"10.3390\/s25144357"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"93","DOI":"10.1145\/3236009","article-title":"A Survey of Methods for Explaining Black Box Models","volume":"51","author":"Guidotti","year":"2019","journal-title":"ACM Comput. Surv."},{"key":"ref_7","unstructured":"Islam, M.A., Mridha, M.F., Jahin, M.A., and Dey, N. (2024). A Unified Framework for Evaluating the Effectiveness and Enhancing the Transparency of Explainable AI Methods in Real-World Applications. arXiv."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"3820","DOI":"10.1109\/TNNLS.2024.3357118","article-title":"Adaptive Sparse Memory Networks for Efficient and Robust Video Object Segmentation","volume":"36","author":"Dang","year":"2025","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Chhetri, T.R., Kurteva, A., DeLong, R.J., Hilscher, R., Korte, K., and Fensel, A. (2022). Data Protection by Design Tool for Automated GDPR Verification. 
Sensors, 22.","DOI":"10.3390\/s22072763"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Liao, Q.V., Gruen, D., and Miller, S. (2020). Questioning the AI: Informing Design Practices for Explainable AI User Experiences. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, ACM.","DOI":"10.1145\/3313831.3376590"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Kabir, M., Gandomi, A., and Wiese, A. (2025). A Review of Explainable Artificial Intelligence from the Perspectives of Challenges and Opportunities. Algorithms, 18.","DOI":"10.3390\/a18090556"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Kostopoulos, G., Davrazos, G., and Kotsiantis, S. (2024). Explainable Artificial Intelligence-Based Decision Support Systems. Electronics, 13.","DOI":"10.3390\/electronics13142842"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"102301","DOI":"10.1016\/j.inffus.2024.102301","article-title":"Explainable Artificial Intelligence (XAI) 2.0","volume":"106","author":"Longo","year":"2024","journal-title":"Inf. Fusion"},{"key":"ref_14","unstructured":"Pinto, J.D., and Paquette, L. (2024). Towards a Unified Framework for Evaluating Explanations. arXiv."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"293","DOI":"10.1080\/17579961.2024.2313795","article-title":"Unlocking the Black Box: Analysing the EU AI Act Framework","volume":"16","author":"Pavlidis","year":"2024","journal-title":"Law Innov. Technol."},{"key":"ref_16","unstructured":"(2023). Artificial Intelligence Management System. Standard No. ISO\/IEC 42001:2023. Available online: https:\/\/www.iso.org\/standard\/81230.html."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Lozano-Murcia, J., G\u00f3mez, R., and Blasco, L. (2025). Protocol for Evaluating Explainability in Actuarial Models. 
Electronics, 14.","DOI":"10.3390\/electronics14081561"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P., and Inkpen, K. (2019). Guidelines for Human\u2013AI Interaction. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, ACM.","DOI":"10.1145\/3290605.3300233"},{"key":"ref_19","unstructured":"Sculley, D., Holt, G., Golovin, D., Davydov, E., Phillips, T., Ebner, D., Chaudhary, V., Young, M., Crespo, J.F., and Dennison, D. (2015). Hidden Technical Debt in Machine Learning Systems. Proceedings of the Advances in Neural Information Processing Systems, Neural Information Processing Systems Foundation, Inc."},{"key":"ref_20","first-page":"753","article-title":"Multi-layered Governance for AI Systems","volume":"35","author":"Kieburtz","year":"2020","journal-title":"AI & Society"},{"key":"ref_21","first-page":"66","article-title":"A Taxonomy of MLOps","volume":"37","author":"Lwakatare","year":"2020","journal-title":"IEEE Softw."},{"key":"ref_22","unstructured":"AryaXAI (2025, November 24). Explainable AI: Enhancing Trust, Performance, and Regulatory Compliance. Available online: https:\/\/www.aryaxai.com\/article\/explainable-ai-enhancing-trust-performance-and-regulatory-compliance."},{"key":"ref_23","unstructured":"Alhena AI (2025, November 24). GDPR Compliance Through Multi-Region Architecture: An Engineering Deep Dive. Available online: https:\/\/alhena.ai\/blog\/gleen-ai-support-gdpr-compute-and-data-in-eu\/."},{"key":"ref_24","unstructured":"WilmerHale (2025, November 24). AI and GDPR: A Road Map to Compliance by Design\u2014Episode 1: The Planning Phase. Available online: https:\/\/www.wilmerhale.com\/en\/insights\/blogs\/wilmerhale-privacy-and-cybersecurity-law\/20250728-ai-and-gdpra-road-map-to-compliance-by-design-episode-1-the-planning-phase."},{"key":"ref_25","unstructured":"Exabeam (2025, November 24). 
The Intersection of GDPR and AI and 6 Compliance Best Practices. Available online: https:\/\/www.exabeam.com\/explainers\/gdpr-compliance\/the-intersection-of-gdpr-and-ai-and-6-compliance-best-practices\/."},{"key":"ref_26","unstructured":"Lundberg, S.M., and Lee, S. (2017). A Unified Approach to Interpreting Model Predictions. Proceedings of the Advances in Neural Information Processing Systems, Neural Information Processing Systems Foundation, Inc."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Ribeiro, M.T., Singh, S., and Guestrin, C. (2016). \u201cWhy Should I Trust You?\u201d: Explaining the Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM.","DOI":"10.1145\/2939672.2939778"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Carvalho, D.V., Pereira, E.M., and Cardoso, J.S. (2019). Machine Learning Interpretability: A Survey on Methods and Metrics. Electronics, 8.","DOI":"10.3390\/electronics8080832"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"4793","DOI":"10.1109\/TNNLS.2020.3027314","article-title":"A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI","volume":"32","author":"Tjoa","year":"2021","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_30","unstructured":"Centre for Data Ethics and Innovation (2025, November 24). The Roadmap to an Effective AI Assurance Ecosystem. Available online: https:\/\/www.gov.uk\/government\/publications\/the-roadmap-to-an-effective-ai-assurance-ecosystem."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Tabassi, E. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0), National Institute of Standards and Technology. 
Technical Report NIST AI 100-1.","DOI":"10.6028\/NIST.AI.100-1"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"3617","DOI":"10.1007\/s43681-024-00595-3","article-title":"Addressing the regulatory gap: Moving towards an EU AI audit ecosystem beyond the AI Act by including civil society","volume":"5","author":"Hartmann","year":"2025","journal-title":"AI Ethics"},{"key":"ref_33","unstructured":"Bass, L., Clements, P., and Kazman, R. (2012). Software Architecture in Practice, Addison-Wesley. [3rd ed.]."},{"key":"ref_34","unstructured":"Garlan, D., and Shaw, M. (1996). Software Architecture: Perspectives on an Emerging Discipline, Prentice Hall."},{"key":"ref_35","unstructured":"Taylor, R.N., Medvidovi\u0107, N., and Dashofy, E.M. (2009). Software Architecture: Foundations, Theory, and Practice, Wiley."},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"101885","DOI":"10.1016\/j.jsis.2024.101885","article-title":"Responsible Artificial Intelligence Governance: A Review and Research Framework","volume":"34","author":"Papagiannidis","year":"2025","journal-title":"J. Strateg. Inf. Syst."},{"key":"ref_37","first-page":"2141","article-title":"From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices","volume":"26","author":"Morley","year":"2021","journal-title":"AI Soc."},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Phillips, P., Hahn, C., Fontana, P., Broniatowski, D.A., and Przybocki, M.A. (2021). Four Principles of Explainable Artificial Intelligence, National Institute of Standards and Technology. NIST Interagency Report (NISTIR) 8312.","DOI":"10.6028\/NIST.IR.8312"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Tran, T.A., Ruppert, T., and Abonyi, J. (2024). 
The Use of eXplainable Artificial Intelligence and Machine Learning Operation Principles to Support the Continuous Development of Machine Learning-Based Solutions in Fault Detection and Identification. Computers, 13.","DOI":"10.3390\/computers13100252"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Umer, M.A., Belay, E.G., and Gouveia, L.B. (2024). Leveraging Artificial Intelligence and Provenance Blockchain Framework to Mitigate Risks in Cloud Manufacturing in Industry 4.0. Electronics, 13.","DOI":"10.3390\/electronics13030660"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Kulothungan, V. (2025). Using Blockchain Ledgers to Record AI Decisions in IoT. IoT, 6.","DOI":"10.3390\/iot6030037"}],"container-title":["Journal of Cybersecurity and Privacy"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2624-800X\/6\/2\/43\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,2]],"date-time":"2026-03-02T16:45:44Z","timestamp":1772469944000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2624-800X\/6\/2\/43"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,2]]},"references-count":42,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2026,4]]}},"alternative-id":["jcp6020043"],"URL":"https:\/\/doi.org\/10.3390\/jcp6020043","relation":{},"ISSN":["2624-800X"],"issn-type":[{"value":"2624-800X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2026,3,2]]}}}