{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,8]],"date-time":"2026-04-08T15:47:15Z","timestamp":1775663235808,"version":"3.50.1"},"reference-count":157,"publisher":"MDPI AG","issue":"6","license":[{"start":{"date-parts":[[2025,5,23]],"date-time":"2025-05-23T00:00:00Z","timestamp":1747958400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100004837","name":"MCIN\/AEI 10.13039\/501100011033 and the European Union","doi-asserted-by":"publisher","award":["PID2021-124335OB-C22"],"award-info":[{"award-number":["PID2021-124335OB-C22"]}],"id":[{"id":"10.13039\/501100004837","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Algorithms"],"abstract":"<jats:p>The lack of transparency in many AI systems continues to hinder their adoption in critical domains such as healthcare, finance, and autonomous systems. While recent explainable AI (XAI) methods\u2014particularly those leveraging large language models\u2014have enhanced output readability, they often lack traceable and verifiable reasoning that is aligned with domain-specific logic. This paper presents Nomological Deductive Reasoning (NDR), supported by Nomological Deductive Knowledge Representation (NDKR), as a framework aimed at improving the transparency and auditability of AI decisions through the integration of formal logic and structured domain knowledge. NDR enables the generation of causal, rule-based explanations by validating statistical predictions against symbolic domain constraints. The framework is evaluated on a credit-risk classification task using the Statlog (German Credit Data) dataset, demonstrating that NDR can produce coherent and interpretable explanations consistent with expert-defined logic. 
While primarily focused on technical integration and deductive validation, the approach lays a foundation for more transparent and norm-compliant AI systems. This work contributes to the growing formalization of XAI by aligning statistical inference with symbolic reasoning, offering a pathway toward more interpretable and verifiable AI decision-making processes.<\/jats:p>","DOI":"10.3390\/a18060306","type":"journal-article","created":{"date-parts":[[2025,5,23]],"date-time":"2025-05-23T09:58:09Z","timestamp":1747994289000},"page":"306","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Nomological Deductive Reasoning for Trustworthy, Human-Readable, and Actionable AI Outputs"],"prefix":"10.3390","volume":"18","author":[{"ORCID":"https:\/\/orcid.org\/0009-0006-0195-5919","authenticated-orcid":false,"given":"Gedeon","family":"Hakizimana","sequence":"first","affiliation":[{"name":"Department of Computer Science & Engineering, Universidad Carlos III de Madrid, 28911 Leganes, Spain"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0041-6829","authenticated-orcid":false,"given":"Agapito","family":"Ledezma Espino","sequence":"additional","affiliation":[{"name":"Department of Computer Science & Engineering, Universidad Carlos III de Madrid, 28911 Leganes, Spain"}]}],"member":"1968","published-online":{"date-parts":[[2025,5,23]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"52138","DOI":"10.1109\/ACCESS.2018.2870052","article-title":"Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)","volume":"6","author":"Adadi","year":"2018","journal-title":"IEEE Access"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Pasquale, F. (2015). 
The Black Box Society: The Secret Algorithms That Control Money and Information, Harvard University Press.","DOI":"10.4159\/harvard.9780674736061"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"265","DOI":"10.1007\/s13347-019-00382-7","article-title":"Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence","volume":"34","author":"Zednik","year":"2021","journal-title":"Philos. Technol."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"45","DOI":"10.1007\/s12559-023-10179-8","article-title":"Interpreting Black Box Models: A Review on Explainable Artificial Intelligence","volume":"16","author":"Hassija","year":"2024","journal-title":"Cogn. Comput."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"963","DOI":"10.1007\/s43681-022-00217-w","article-title":"Explainable AI Lacks Regulative Reasons: Why AI and Human Decision-Making Are Not Equally Opaque","volume":"3","author":"Peters","year":"2023","journal-title":"AI Ethics"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"3503","DOI":"10.1007\/s10462-021-10088-y","article-title":"Explainable Artificial Intelligence: A Comprehensive Review","volume":"55","author":"Minh","year":"2022","journal-title":"Artif. Intell. Rev."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"281","DOI":"10.1016\/j.future.2022.03.009","article-title":"The Explainability Paradox: Challenges for xAI in Digital Pathology","volume":"133","author":"Evans","year":"2022","journal-title":"Fut. Genet. Comput. Syst."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"216","DOI":"10.1007\/s10462-024-10854-8","article-title":"Explainable Artificial Intelligence (XAI) in Finance: A Systematic Literature Review","volume":"57","year":"2024","journal-title":"Artif. Intell. 
Rev."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"867","DOI":"10.1007\/s11301-023-00320-0","article-title":"Applications of Explainable Artificial Intelligence in Finance\u2014A Systematic Review of Finance, Information Systems, and Computer Science Literature","volume":"74","author":"Weber","year":"2024","journal-title":"Manag. Rev. Q."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"101603","DOI":"10.1109\/ACCESS.2024.3431437","article-title":"Explainable artificial intelligence for autonomous driving: A comprehensive overview and field guide for future research directions","volume":"12","author":"Atakishiyev","year":"2024","journal-title":"IEEE Access"},{"key":"ref_11","unstructured":"European Union (2025, January 25). General Data Protection Regulation (GDPR). Available online: https:\/\/gdpr.eu\/."},{"key":"ref_12","unstructured":"OECD (2025, January 25). OECD Principles on Artificial Intelligence. Available online: https:\/\/www.oecd.org\/going-digital\/ai\/principles\/."},{"key":"ref_13","unstructured":"European Commission (2025, January 25). Artificial Intelligence Act. Available online: https:\/\/digital-strategy.ec.europa.eu\/en\/policies\/regulatory-framework-ai."},{"key":"ref_14","unstructured":"State Council of China (2025, January 25). Ethical Guidelines for Artificial Intelligence, Available online: https:\/\/www.gov.cn\/zhengce\/2022-01-01\/."},{"key":"ref_15","unstructured":"Ministry of Internal Affairs and Communications of Japan (2025, January 25). AI R&D Guidelines, Available online: https:\/\/www.soumu.go.jp\/main_content\/000730466.pdf."},{"key":"ref_16","unstructured":"Australian Government (2025, January 25). AI Ethics Framework, Available online: https:\/\/www.industry.gov.au\/data-and-publications\/australias-artificial-intelligence-ethics-framework."},{"key":"ref_17","unstructured":"South African Government (2025, January 25). 
South Africa\u2019s Artificial Intelligence Policy, Available online: https:\/\/www.dcdt.gov.za\/sa-national-ai-policy-framework\/file\/338-sa-national-ai-policy-framework.html."},{"key":"ref_18","unstructured":"Nigerian Government (2025, January 25). Nigeria\u2019s Artificial Intelligence Policy, Available online: https:\/\/www.nitda.gov.ng."},{"key":"ref_19","unstructured":"Rwanda Government (2025, January 25). Rwanda\u2019s AI Policy, Available online: https:\/\/www.minict.gov.rw\/index.php?eID=dumpFile&t=f&f=67550&token=6195a53203e197efa47592f40ff4aaf24579640e."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Ribeiro, M.T., Singh, S., and Guestrin, C. (2016, January 13\u201317). \u201cWhy should I trust you?\u201d Explaining the Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA.","DOI":"10.1145\/2939672.2939778"},{"key":"ref_21","unstructured":"Lundberg, S.M., and Lee, S.-I. (2017). A Unified Approach to Interpreting Model Predictions. Adv. Neural Inf. Process. Syst., 30."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Ribeiro, M.T., Singh, S., and Guestrin, C. (2024, December 18). Anchors: High-Precision Model-Agnostic Explanations. In Proceedings of the 32nd International Conference on Neural Information Processing Systems (NeurIPS 2018); pp. 1525\u20131535. Available online: https:\/\/arxiv.org\/abs\/1802.07681.","DOI":"10.1609\/aaai.v32i1.11491"},{"key":"ref_23","unstructured":"Chen, J., Song, L., Wang, S., Xie, L., Wang, X., and Zhang, M. (2024, November 14). Towards Prototype-Based Explanations of Deep Neural Networks. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019), 2019. Available online: https:\/\/arxiv.org\/abs\/1905.11742."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Ehsan, U., and Riedl, M. (2024, January 21\u201323). 
Explainable AI Reloaded: Challenging the XAI Status Quo in the Era of Large Language Models. Proceedings of the Halfway to the Future Symposium, Santa Cruz, CA, USA.","DOI":"10.1145\/3686169.3686185"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"114104","DOI":"10.1016\/j.eswa.2020.114104","article-title":"Shapley-Lorenz Explainable Artificial Intelligence","volume":"167","author":"Giudici","year":"2021","journal-title":"Expert Syst. Appl."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"113941","DOI":"10.1016\/j.eswa.2020.113941","article-title":"Post-Hoc Explanation of Black-Box Classifiers Using Confident Itemsets","volume":"165","author":"Moradi","year":"2021","journal-title":"Expert Syst. Appl."},{"key":"ref_27","unstructured":"Vilone, G., and Longo, L. (2020). Explainable Artificial Intelligence: A Systematic Review. arXiv, Available online: https:\/\/arxiv.org\/abs\/2006.00093."},{"key":"ref_28","unstructured":"Miller, T. (2018). Explanation in Artificial Intelligence: Insights from the Social Sciences. arXiv, Available online: https:\/\/arxiv.org\/abs\/1706.07222."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"102301","DOI":"10.1016\/j.inffus.2024.102301","article-title":"Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions","volume":"106","author":"Longo","year":"2024","journal-title":"Inf. Fusion"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"36","DOI":"10.1145\/3233231","article-title":"The Mythos of Model Interpretability","volume":"61","author":"Lipton","year":"2018","journal-title":"Commun. ACM"},{"key":"ref_31","unstructured":"Rago, A., Palfi, B., Sukpanichnant, P., Nabli, H., Vivek, K., Kostopoulou, O., Kinross, J., and Toni, F. (2024). Exploring the Effect of Explanation Content and Format on User Comprehension and Trust. 
arXiv, Available online: https:\/\/arxiv.org\/abs\/2408.17401."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"101805","DOI":"10.1016\/j.inffus.2023.101805","article-title":"Explainable Artificial Intelligence (XAI): What We Know and What Is Left to Attain Trustworthy Artificial Intelligence","volume":"99","author":"Ali","year":"2023","journal-title":"Inf. Fusion"},{"key":"ref_33","unstructured":"Hofmann, H. (2025, February 10). Statlog (German Credit Data). UCI Machine Learning Repository. Available online: https:\/\/archive.ics.uci.edu\/dataset\/144\/statlog+german+credit+data."},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"5","DOI":"10.1023\/A:1010933404324","article-title":"Random Forests","volume":"45","author":"Breiman","year":"2001","journal-title":"Mach. Learn."},{"key":"ref_35","unstructured":"De Graaf, M.M., and Malle, B.F. (2017, January 9\u201311). How People Explain Action (and Autonomous Intelligent Systems Should Too). Proceedings of the 2017 AAAI Fall Symposium Series, Arlington, VA, USA."},{"key":"ref_36","unstructured":"Nijholt, A., Reidsma, D., and Hondorp, H. (2009). A Study into Preferred Explanations of Virtual Agent Behavior. Intelligent Virtual Agents, Proceedings of the 9th International Workshop, Amsterdam, The Netherlands, 14\u201316 September 2009, Springer."},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"89","DOI":"10.1016\/j.inffus.2021.05.009","article-title":"Notions of Explainability and Evaluation Approaches for Explainable Artificial Intelligence","volume":"76","author":"Vilone","year":"2021","journal-title":"Inf. Fusion"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"615","DOI":"10.3390\/make3030032","article-title":"Classification of Explainable Artificial Intelligence Methods through Their Output Formats","volume":"3","author":"Vilone","year":"2021","journal-title":"Mach. Learn. Knowl. 
Extr."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Love, P., Fang, W., Matthews, J., Porter, S., Luo, H., and Ding, L. (2022). Explainable Artificial Intelligence (XAI): Precepts, Methods, and Opportunities for Research in Construction. arXiv.","DOI":"10.1016\/j.aei.2023.102024"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Malaviya, C., Lee, S., Roth, D., and Yatskar, M. (2024, January 16\u201321). What If You Said That Differently? How Explanation Formats Affect Human Feedback Efficacy and User Perception. Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Mexico City, Mexico.","DOI":"10.18653\/v1\/2024.naacl-long.168"},{"key":"ref_41","first-page":"1","article-title":"Evaluating the Impact of Human Explanation Strategies on Human-AI Visual Decision-Making","volume":"7","author":"Morrison","year":"2023","journal-title":"Proc. ACM Hum.\u2014Comput. Interact."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Chattopadhay, A., Sarkar, A., Howlader, P., and Balasubramanian, V.N. (2018, January 12\u201315). Grad-CAM++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks. Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA.","DOI":"10.1109\/WACV.2018.00097"},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"4328","DOI":"10.1109\/JBHI.2021.3111415","article-title":"Choquet Integral and Coalition Game-Based Ensemble of Deep Learning Models for COVID-19 Screening From Chest X-Ray Images","volume":"25","author":"Bhowal","year":"2021","journal-title":"IEEE J. Biomed. Health Inform."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2017, January 22\u201329). 
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.74"},{"key":"ref_45","unstructured":"Simonyan, K., Vedaldi, A., and Zisserman, A. (2014). Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. arXiv."},{"key":"ref_46","first-page":"1","article-title":"Visualizing Higher-Layer Features of a Deep Network","volume":"1341","author":"Erhan","year":"2009","journal-title":"Univ. Montr."},{"key":"ref_47","unstructured":"Shrikumar, A., Greenside, P., and Kundaje, A. (2017). Learning Important Features Through Propagating Activation Differences. arXiv."},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Bach, S., Binder, A., Montavon, G., Klauschen, F., M\u00fcller, K.R., and Samek, W. (2015). On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLoS ONE, 10.","DOI":"10.1371\/journal.pone.0130140"},{"key":"ref_49","unstructured":"Precup, D., and Teh, Y.W. (2017, January 6\u201311). Axiomatic Attribution for Deep Networks. Proceedings of the 34th International Conference on Machine Learning (ICML 2017), Sydney, Australia."},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Mostafa, S., Mondal, D., Beck, M.A., Bidinosti, C.P., Henry, C.J., and Stavness, I. (2022). Leveraging Guided Backpropagation to Select Convolutional Neural Networks for Plant Classification. Front. Artif. Intell., 5.","DOI":"10.3389\/frai.2022.871162"},{"key":"ref_51","unstructured":"Petsiuk, V., Das, A., and Saenko, K. (2018). RISE: Randomized Input Sampling for Explanation of Black-Box Models. arXiv."},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Kumar, D., Wong, A., and Taylor, G.W. (2017, January 21\u201326). Explaining the Unexplained: A Class-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA.","DOI":"10.1109\/CVPRW.2017.215"},{"key":"ref_53","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2020). An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv."},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Poli, J.-P., Ouerdane, W., and Pierrard, R. (2021, January 11\u201314). Generation of Textual Explanations in XAI: The Case of Semantic Annotation. Proceedings of the 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Luxembourg.","DOI":"10.1109\/FUZZ45933.2021.9494589"},{"key":"ref_55","doi-asserted-by":"crossref","unstructured":"Selvaraju, R.R., Chattopadhyay, P., Elhoseiny, M., Sharma, T., Batra, D., Parikh, D., and Lee, S. (2018, January 8\u201314). Choose Your Neuron: Incorporating Domain Knowledge through Neuron-Importance. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01261-8_32"},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Bellini, V., Schiavone, A., Di Noia, T., Ragone, A., and Di Sciascio, E. (2018, January 6). Knowledge-Aware Autoencoders for Explainable Recommender Systems. Proceedings of the 3rd Workshop on Deep Learning for Recommender Systems, Vancouver, BC, Canada.","DOI":"10.1145\/3270323.3270327"},{"key":"ref_57","doi-asserted-by":"crossref","unstructured":"Zhang, W., Paudel, B., Zhang, W., Bernstein, A., and Chen, H. (2019, January 11\u201315). Interaction Embeddings for Prediction and Explanation in Knowledge Graphs. Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, Melbourne, Australia.","DOI":"10.1145\/3289600.3291014"},{"key":"ref_58","doi-asserted-by":"crossref","unstructured":"Lei, T., Barzilay, R., and Jaakkola, T. (2016). 
Rationalizing Neural Predictions. arXiv.","DOI":"10.18653\/v1\/D16-1011"},{"key":"ref_59","unstructured":"Barratt, S. (2017). InterpNet: Neural Introspection for Interpretable Deep Learning. arXiv."},{"key":"ref_60","doi-asserted-by":"crossref","first-page":"3","DOI":"10.1007\/978-3-319-46493-0_1","article-title":"Generating Visual Explanations","volume":"Volume 14","author":"Hendricks","year":"2016","journal-title":"Computer Vision\u2013ECCV 2016: Proceedings of the 14th European Conference, Amsterdam, The Netherlands, 11\u201314 October 2016, Proceedings, Part IV"},{"key":"ref_61","doi-asserted-by":"crossref","first-page":"125562","DOI":"10.1109\/ACCESS.2019.2937521","article-title":"Human-Centric AI for Trustworthy IoT Systems with Explainable Multilayer Perceptrons","volume":"7","author":"Muttukrishnan","year":"2019","journal-title":"IEEE Access"},{"key":"ref_62","unstructured":"Bennetot, A., Laurent, J.L., Chatila, R., and D\u00edaz-Rodr\u00edguez, N. (2019). Towards Explainable Neural-Symbolic Visual Reasoning. arXiv."},{"key":"ref_63","first-page":"1350","article-title":"Interpretable Classifiers Using Rules and Bayesian Analysis: Building a Better Stroke Prediction Model","volume":"110","author":"Letham","year":"2015","journal-title":"J. Am. Stat. Assoc."},{"key":"ref_64","doi-asserted-by":"crossref","unstructured":"Dubitzky, W., Wolkenhauer, O., Cho, K.H., and Yokota, H. (2013). Rule-Based Methods. Encyclopedia of Systems Biology, Springer.","DOI":"10.1007\/978-1-4419-9863-7"},{"key":"ref_65","doi-asserted-by":"crossref","first-page":"81","DOI":"10.1007\/BF00116251","article-title":"Induction of Decision Trees","volume":"1","author":"Quinlan","year":"1986","journal-title":"Mach. Learn."},{"key":"ref_66","doi-asserted-by":"crossref","unstructured":"Ribeiro, M.T., Singh, S., and Guestrin, C. (2018, January 2\u20137). Anchors: High-Precision Model-Agnostic Explanations. 
Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, the Thirtieth Innovative Applications of Artificial Intelligence Conference, and the Eighth AAAI Symposium on Educational Advances in Artificial Intelligence (AAAI\u201918\/IAAI\u201918\/EAAI\u201918), New Orleans, LA, USA.","DOI":"10.1609\/aaai.v32i1.11491"},{"key":"ref_67","unstructured":"Precup, D., and Teh, Y.W. (2017, January 6\u201311). Scalable Bayesian Rule Lists. Proceedings of the 34th International Conference on Machine Learning (ICML 2017), Sydney, Australia."},{"key":"ref_68","doi-asserted-by":"crossref","first-page":"52015","DOI":"10.1109\/ACCESS.2021.3062763","article-title":"Reg-Rules: An Explainable Rule-Based Ensemble Learner for Classification","volume":"9","author":"Almutairi","year":"2021","journal-title":"IEEE Access"},{"key":"ref_69","doi-asserted-by":"crossref","unstructured":"Lakkaraju, H., Bach, S.H., and Leskovec, J. (2016, January 13\u201317). Interpretable Decision Sets: A Joint Framework for Description and Prediction. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA.","DOI":"10.1145\/2939672.2939874"},{"key":"ref_70","doi-asserted-by":"crossref","first-page":"9275318","DOI":"10.1155\/2017\/2460174","article-title":"An Interpretable Classification Framework for Information Extraction from Online Healthcare Forums","volume":"2017","author":"Gao","year":"2017","journal-title":"J. Healthc. Eng."},{"key":"ref_71","doi-asserted-by":"crossref","first-page":"303","DOI":"10.1016\/0010-4809(75)90009-9","article-title":"Computer-Based Consultations in Clinical Therapeutics: Explanation and Rule Acquisition Capabilities of the MYCIN System","volume":"8","author":"Shortliffe","year":"1975","journal-title":"Comput. Biomed. 
Res."},{"key":"ref_72","doi-asserted-by":"crossref","first-page":"17001","DOI":"10.1109\/ACCESS.2019.2893141","article-title":"Evolving Rule-Based Explainable Artificial Intelligence for Unmanned Aerial Vehicles","volume":"7","author":"Keneni","year":"2019","journal-title":"IEEE Access"},{"key":"ref_73","doi-asserted-by":"crossref","unstructured":"Bride, H., Dong, J., Dong, J.S., and H\u00f3u, Z. (2018, January 12\u201316). Towards Dependable and Explainable Machine Learning Using Automated Reasoning. Proceedings of the Formal Methods and Software Engineering: 20th International Conference on Formal Engineering Methods, ICFEM 2018, Gold Coast, QLD, Australia. Proceedings 20.","DOI":"10.1007\/978-3-030-02450-5_25"},{"key":"ref_74","first-page":"295","article-title":"Accuracy vs. Comprehensibility in Data Mining Models","volume":"Volume 1","author":"Johansson","year":"2004","journal-title":"Proceedings of the Seventh International Conference on Information Fusion"},{"key":"ref_75","doi-asserted-by":"crossref","first-page":"103457","DOI":"10.1016\/j.artint.2021.103457","article-title":"GlocalX\u2014From Local to Global Explanations of Black Box AI Models","volume":"294","author":"Setzu","year":"2021","journal-title":"Artif. Intell."},{"key":"ref_76","doi-asserted-by":"crossref","unstructured":"Asano, K., and Chun, J. (2021, January 4\u20136). Post-Hoc Explanation Using a Mimic Rule for Numerical Data. Proceedings of the ICAART (2), Virtual, Online.","DOI":"10.5220\/0010238907680774"},{"key":"ref_77","doi-asserted-by":"crossref","first-page":"1189","DOI":"10.1214\/aos\/1013203451","article-title":"Greedy Function Approximation: A Gradient Boosting Machine","volume":"29","author":"Friedman","year":"2001","journal-title":"Ann. Stat."},{"key":"ref_78","doi-asserted-by":"crossref","unstructured":"Hastie, T., Tibshirani, R., and Friedman, J. (2001). 
The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer.","DOI":"10.1007\/978-0-387-21606-5"},{"key":"ref_79","doi-asserted-by":"crossref","first-page":"1340","DOI":"10.1093\/bioinformatics\/btq134","article-title":"Permutation Importance: A Corrected Feature Importance Measure","volume":"26","author":"Altmann","year":"2010","journal-title":"Bioinformatics"},{"key":"ref_80","doi-asserted-by":"crossref","unstructured":"Marc\u00edlio, W.E., and Eler, D.M. (2020, January 7\u201310). From Explanations to Feature Selection: Assessing SHAP Values as Feature Selection Mechanism. Proceedings of the 2020 33rd SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Recife\/Porto de Galinhas, Brazil.","DOI":"10.1109\/SIBGRAPI51738.2020.00053"},{"key":"ref_81","doi-asserted-by":"crossref","first-page":"44","DOI":"10.1080\/10618600.2014.907095","article-title":"Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation","volume":"24","author":"Goldstein","year":"2015","journal-title":"J. Comput. Graph. Stat."},{"key":"ref_82","unstructured":"Caruana, R., and Lou, Y. (2018, January 10\u201315). Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). Proceedings of the 35th International Conference on Machine Learning (ICML 2018), Stockholm, Sweden."},{"key":"ref_83","doi-asserted-by":"crossref","unstructured":"Tan, S., Caruana, R., Hooker, G., and Lou, Y. (2018, January 2\u20137). Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation. Proceedings of the 2018 AAAI\/ACM Conference on AI, Ethics, and Society, New Orleans, LA, USA.","DOI":"10.1145\/3278721.3278725"},{"key":"ref_84","first-page":"1","article-title":"An Efficient Explanation of Individual Classifications Using Game Theory","volume":"11","author":"Strumbelj","year":"2010","journal-title":"J. Mach. Learn. 
Res."},{"key":"ref_85","unstructured":"\u0160trumbelj, E., and Kononenko, I. (2008, January 2\u20135). Towards a Model Independent Method for Explaining Classification for Individual Instances. Proceedings of the Data Warehousing and Knowledge Discovery: 10th International Conference, DaWaK 2008, Turin, Italy. Proceedings 10."},{"key":"ref_86","doi-asserted-by":"crossref","first-page":"95","DOI":"10.1007\/s10115-017-1116-3","article-title":"Auditing Black-Box Models for Indirect Influence","volume":"54","author":"Adler","year":"2018","journal-title":"Knowl. Inf. Syst."},{"key":"ref_87","unstructured":"Alain, G., and Bengio, Y. (2016). Understanding Intermediate Layers Using Linear Classifier Probes. arXiv."},{"key":"ref_88","unstructured":"Fr\u00e4mling, K. (1996, January 1\u20132). Explaining Results of Neural Networks by Contextual Importance and Utility. Proceedings of the AISB, Brighton, UK."},{"key":"ref_89","unstructured":"Juscafresa, A.N. (2022). An Introduction to Explainable Artificial Intelligence with LIME and SHAP. [Bachelor\u2019s Thesis, Degree in Computer Engineering, Universitat de Barcelona]. Available online: https:\/\/www.google.com\/url?sa=t&source=web&rct=j&opi=89978449&url=https:\/\/sergioescalera.com\/wp-content\/uploads\/2022\/06\/presentacio_tfg_nieto_juscafresa_aleix.pdf&ved=2ahUKEwjni4PW4sCNAxXF7TgGHbtUAJsQFnoECBkQAQ&usg=AOvVaw35mUA85cyJvPfQ2SaFsHTS."},{"key":"ref_90","unstructured":"Chen, J., Song, L., Wainwright, M., and Jordan, M. (2018, January 10\u201315). Learning to Explain: An Information-Theoretic Perspective on Model Interpretation. Proceedings of the 35th International Conference on Machine Learning 2018, Stockholm, Sweden."},{"key":"ref_91","unstructured":"Verma, S., Boonsanong, V., Hoang, M., Hines, K.E., Dickerson, J.P., and Shah, C. (2020). Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review. arXiv."},{"key":"ref_92","unstructured":"Bach, F., and Blei, D. (2015, January 6\u201311). 
Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), Lille, France."},{"key":"ref_93","unstructured":"Yang, S.C.H., and Shafto, P. (2017, January 9). Explainable Artificial Intelligence via Bayesian Teaching. Proceedings of the NIPS 2017 Workshop on Teaching Machines, Robots, and Humans, Long Beach, CA, USA."},{"key":"ref_94","unstructured":"Caruana, R., Kangarloo, H., Dionisio, J.D., Sinha, U., and Johnson, D. (1999). Case-Based Explanation of Non-Case-Based Learning Methods. Proceedings of the AMIA Symposium, American Medical Informatics Association."},{"key":"ref_95","first-page":"1064","article-title":"explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning","volume":"26","author":"Spinner","year":"2019","journal-title":"IEEE Trans. Vis. Comput. Graph."},{"key":"ref_96","unstructured":"Chaudhuri, K., and Sugiyama, M. (2019, January 16\u201318). Interpreting Black Box Predictions Using Fisher Kernels. Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS 2019), Naha, Okinawa, Japan."},{"key":"ref_97","unstructured":"Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (2018). Explanations Based on the Missing: Towards Contrastive Explanations with Pertinent Negatives. Advances in Neural Information Processing Systems 31 (NeurIPS 2018), Curran Associates, Inc."},{"key":"ref_98","doi-asserted-by":"crossref","first-page":"1264372","DOI":"10.3389\/frai.2023.1264372","article-title":"Explainability as the Key Ingredient for AI Adoption in Industry 5.0 Settings","volume":"6","author":"Agostinho","year":"2023","journal-title":"Front. Artif. 
Intell."},{"key":"ref_99","first-page":"1","article-title":"A Comprehensive Survey of Explainable Artificial Intelligence (XAI) Methods: Exploring Transparency and Interpretability","volume":"Volume 14306","author":"Zhang","year":"2023","journal-title":"Web Information Systems Engineering\u2014WISE 2023"},{"key":"ref_100","unstructured":"Dvorak, J., Kopp, T., Kinkel, S., and Lanza, G. (2022, January 14\u201316). Explainable AI: A Key Driver for AI Adoption, a Mistaken Concept, or a Practically Irrelevant Feature?. Proceedings of the 4th UR-AI Symposium, Villingen-Schwenningen, Germany."},{"key":"ref_101","unstructured":"Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., and Kim, B. (2018). Sanity Checks for Saliency Maps. arXiv."},{"key":"ref_102","doi-asserted-by":"crossref","unstructured":"Kim, B., Seo, J., Jeon, S., Koo, J., Choe, J., and Jeon, T. (2019). Why Are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps. arXiv.","DOI":"10.1109\/ICCVW.2019.00510"},{"key":"ref_103","doi-asserted-by":"crossref","unstructured":"Slack, D., Song, L., Koyejo, S.O., Padhye, J., Dhurandhar, A., Zhang, Y., Sattigeri, P., Hughes, T., Mojsilovi\u0107, A., and Varshney, K.R. (2020). Fooling LIME and SHAP: Adversarial Attacks on Post-Hoc Explanation Methods. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAccT), Barcelona, Spain, 27\u201330 January 2020, ACM.","DOI":"10.1145\/3375627.3375830"},{"key":"ref_104","first-page":"2242","article-title":"Interpretation of Neural Networks is Fragile","volume":"Volume 97","author":"Chaudhuri","year":"2019","journal-title":"Proceedings of the 36th International Conference on Machine Learning (ICML 2019)"},{"key":"ref_105","doi-asserted-by":"crossref","unstructured":"Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., and Elhadad, N. (2015, January 10\u201313). Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. 
Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, Australia.","DOI":"10.1145\/2783258.2788613"},{"key":"ref_106","unstructured":"Carter, S., Kim, B., Brown, R., and Doshi-Velez, F. (2019, January 5\u20138). Visualizing and Understanding High-Dimensional Models in Machine Learning. Proceedings of the 2019 IEEE International Conference on Data Science and Advanced Analytics (DSAA), Washington, DC, USA."},{"key":"ref_107","unstructured":"Molnar, C. (2020). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, Springer."},{"key":"ref_108","first-page":"121","article-title":"Ethics of Explainability: The Social and Psychological Implications of Algorithmic Decisions","volume":"25","author":"Woutersen","year":"2021","journal-title":"J. Ethics"},{"key":"ref_109","unstructured":"Dave, P., and Dastin, J. (2025, March 20). Money, Mimicry, and Mind Control: Big Tech Slams Ethics Brakes on AI. Reuters, Available online: https:\/\/news.trust.org\/item\/20210908095953-jtdiz."},{"key":"ref_110","first-page":"25","article-title":"Explaining Explanations: A Taxonomy of AI Interpretability and Its Implications for Trust and User Behavior","volume":"8","author":"Hoffman","year":"2018","journal-title":"ACM Trans. Interact. Intell. Syst."},{"key":"ref_111","unstructured":"Anderson, J.R. (2005). Cognitive Psychology and Its Implications, W.H. Freeman. [6th ed.]."},{"key":"ref_112","unstructured":"Newell, A., and Simon, H.A. (1972). Human Problem Solving, Prentice-Hall."},{"key":"ref_113","unstructured":"Plato (1999). The Republic (Jowett, B., Trans.), Dover Publications."},{"key":"ref_114","unstructured":"Descartes, R. (1996). Meditations on First Philosophy (Cottingham, J., Trans.), Cambridge University Press."},{"key":"ref_115","unstructured":"Piaget, J. (1972). The Psychology of the Child, Basic Books."},{"key":"ref_116","unstructured":"Vygotsky, L.S. (Cole, M., John-Steiner, V., Scribner, S., and Souberman, E., Eds.) (1978). 
Mind in Society: The Development of Higher Psychological Processes, Harvard University Press."},{"key":"ref_117","unstructured":"Minsky, M. (1974). A Framework for Representing Knowledge, Massachusetts Institute of Technology. Technical Report."},{"key":"ref_118","unstructured":"McCarthy, J.J., Minsky, M.L., and Rochester, N. (1959). Artificial Intelligence, Research Laboratory of Electronics (RLE), Massachusetts Institute of Technology (MIT). Available online: https:\/\/dspace.mit.edu\/bitstream\/handle\/1721.1\/52263\/RLE_QPR_053_XIII.pdf."},{"key":"ref_119","unstructured":"McCarthy, J. (1959). Programs with Common Sense. Mechanization of Thought Processes, Her Majesty\u2019s Stationery Office. Available online: https:\/\/stacks.stanford.edu\/file\/druid:yt623dt2417\/yt623dt2417.pdf."},{"key":"ref_120","first-page":"84","article-title":"Artificial Intelligence, Language, and the Study of Knowledge","volume":"1","author":"Goldstein","year":"1977","journal-title":"Cogn. Sci."},{"key":"ref_121","unstructured":"Sowa, J.F. (2000). Knowledge Representation: Logical, Philosophical, and Computational Foundations, Brooks\/Cole."},{"key":"ref_122","unstructured":"Di Maio, P. (2025, February 17). Mindful Technology. Buddhist Door, Online Article, Hong Kong, 2019. Available online: https:\/\/www.buddhistdoor.net\/features\/knowledge-representation-in-the-nalanda-buddhist-tradition."},{"key":"ref_123","doi-asserted-by":"crossref","unstructured":"Guarino, N. (2009). The Ontological Level: Revisiting 30 Years of Knowledge Representation. Conceptual Modeling: Foundations and Applications, Springer.","DOI":"10.1007\/978-3-642-02463-4_4"},{"key":"ref_124","doi-asserted-by":"crossref","unstructured":"Di Maio, P. (2020). Neurosymbolic Knowledge Representation for Explainable and Trustworthy AI. 
Preprints, 2020010163.","DOI":"10.20944\/preprints202001.0163.v1"},{"key":"ref_125","doi-asserted-by":"crossref","unstructured":"Besold, T.R., d\u2019Avila Garcez, A., Bader, S., Bowman, H., Domingos, P., Hitzler, P., K\u00fchnberger, K.-U., Lamb, L.C., Lowd, D., and Moura, J.M.F. (2021). Neural-Symbolic Learning and Reasoning: A Survey and Interpretation 1. Neuro-Symbolic Artificial Intelligence: The State of the Art, IOS Press.","DOI":"10.3233\/FAIA210348"},{"key":"ref_126","doi-asserted-by":"crossref","first-page":"103627","DOI":"10.1016\/j.artint.2021.103627","article-title":"Knowledge Graphs as Tools for Explainable Machine Learning: A Survey","volume":"302","author":"Tiddi","year":"2022","journal-title":"Artif. Intell."},{"key":"ref_127","doi-asserted-by":"crossref","first-page":"54","DOI":"10.1145\/3241036","article-title":"The Seven Tools of Causal Inference, with Reflections on Machine Learning","volume":"62","author":"Pearl","year":"2019","journal-title":"Commun. ACM"},{"key":"ref_128","doi-asserted-by":"crossref","unstructured":"Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann.","DOI":"10.1016\/B978-0-08-051489-5.50008-4"},{"key":"ref_129","doi-asserted-by":"crossref","first-page":"424","DOI":"10.2307\/1912791","article-title":"Investigating Causal Relations by Econometric Models and Cross-Spectral Methods","volume":"37","author":"Granger","year":"1969","journal-title":"Econometrica"},{"key":"ref_130","doi-asserted-by":"crossref","unstructured":"Spirtes, P., Glymour, C., and Scheines, R. (2000). Causation, Prediction, and Search, MIT Press. [2nd ed.].","DOI":"10.7551\/mitpress\/1754.001.0001"},{"key":"ref_131","doi-asserted-by":"crossref","unstructured":"Pearl, J. (2009). Causality: Models, Reasoning, and Inference, Cambridge University Press. [2nd ed.].","DOI":"10.1017\/CBO9780511803161"},{"key":"ref_132","unstructured":"Merton, R.K. (1967). 
The Sociology of Science: Theoretical and Empirical Investigations, University of Chicago Press."},{"key":"ref_133","unstructured":"Popper, K. (2002). The Logic of Scientific Discovery, Routledge."},{"key":"ref_134","doi-asserted-by":"crossref","first-page":"135","DOI":"10.1086\/286983","article-title":"Studies in the Logic of Explanation","volume":"15","author":"Hempel","year":"1948","journal-title":"Philos. Sci."},{"key":"ref_135","first-page":"25007","article-title":"A Psychological Theory of Explainability","volume":"Volume 162","author":"Chaudhuri","year":"2022","journal-title":"Proceedings of the 39th International Conference on Machine Learning (ICML 2022)"},{"key":"ref_136","unstructured":"Johnson-Laird, P.N. (1983). Mental Models: Towards a Cognitive Science of Language and Reasoning, Cambridge University Press."},{"key":"ref_137","first-page":"443","article-title":"The Rational Analysis of Deductive Reasoning","volume":"106","author":"Chater","year":"1999","journal-title":"Psychol. Rev."},{"key":"ref_138","unstructured":"Baron, J. (2008). Thinking and Deciding, Cambridge University Press. [4th ed.]."},{"key":"ref_139","doi-asserted-by":"crossref","unstructured":"Newton, I. (1687). Philosophi\u00e6 Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), Royal Society.","DOI":"10.5479\/sil.52126.39088015628399"},{"key":"ref_140","doi-asserted-by":"crossref","unstructured":"Hume, D. (1748). An Enquiry Concerning Human Understanding, A. Millar.","DOI":"10.1093\/oseo\/instance.00032980"},{"key":"ref_141","unstructured":"Hempel, C.G. (1965). Aspects of Scientific Explanation and Other Essays in the Philosophy of Science, Free Press."},{"key":"ref_142","unstructured":"Mitchell, T.M. (1997). Machine Learning, McGraw-Hill."},{"key":"ref_143","unstructured":"Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep Learning, MIT Press. 
Available online: https:\/\/www.deeplearningbook.org."},{"key":"ref_144","unstructured":"Bhattacharjee, A., Moraffah, R., Garland, J., and Liu, H. (2024). Towards LLM-Guided Causal Explainability for Black-Box Text Classifiers. arXiv."},{"key":"ref_145","unstructured":"Kroeger, N., Ley, D., Krishna, S., Agarwal, C., and Lakkaraju, H. (2023). In-context explainers: Harnessing LLMs for explaining black box models. arXiv."},{"key":"ref_146","doi-asserted-by":"crossref","unstructured":"Nguyen, V.B., Schl\u00f6tterer, J., and Seifert, C. (2023, January 26\u201328). From Black Boxes to Conversations: Incorporating XAI in a Conversational Agent. Proceedings of the World Conference on Explainable Artificial Intelligence, Lisboa, Portugal.","DOI":"10.1007\/978-3-031-44070-0_4"},{"key":"ref_147","doi-asserted-by":"crossref","first-page":"873","DOI":"10.1038\/s42256-023-00692-8","article-title":"Explaining machine learning models with interactive natural language conversations using TalkToModel","volume":"5","author":"Slack","year":"2023","journal-title":"Nat. Mach. Intell."},{"key":"ref_148","unstructured":"Zytek, A., Pid\u00f2, S., and Veeramachaneni, K. (2024). LLMs for XAI: Future directions for explaining explanations. arXiv."},{"key":"ref_149","doi-asserted-by":"crossref","unstructured":"Burton, J., Al Moubayed, N., and Enshaei, A. (2023, January 18\u201323). Natural Language Explanations for Machine Learning Classification Decisions. Proceedings of the 2023 International Joint Conference on Neural Networks (IJCNN), Queensland, Australia.","DOI":"10.1109\/IJCNN54540.2023.10191637"},{"key":"ref_150","unstructured":"Mavrepis, P., Makridis, G., Fatouros, G., Koukos, V., Separdani, M.M., and Kyriazis, D. (2024). XAI for all: Can large language models simplify explainable AI?. arXiv."},{"key":"ref_151","unstructured":"Cambria, E., Malandri, L., Mercorio, F., Nobani, N., and Seveso, A. (2024). 
XAI meets LLMs: A survey of the relation between explainable AI and large language models. arXiv."},{"key":"ref_152","unstructured":"Lareo, X. (2025, March 02). Large Language Models (LLM). European Data Protection Supervisor. Available online: https:\/\/edps.europa.eu."},{"key":"ref_153","doi-asserted-by":"crossref","unstructured":"Salmon, W.C. (1971). Statistical Explanation and Statistical Relevance, University of Pittsburgh Press.","DOI":"10.2307\/j.ctt6wrd9p"},{"key":"ref_154","doi-asserted-by":"crossref","unstructured":"Salmon, W.C. (1984). Scientific Explanation and the Causal Structure of the World, Princeton University Press.","DOI":"10.1515\/9780691221489"},{"key":"ref_155","doi-asserted-by":"crossref","first-page":"5","DOI":"10.2307\/2024924","article-title":"Explanation and scientific understanding","volume":"71","author":"Friedman","year":"1974","journal-title":"J. Philos."},{"key":"ref_156","doi-asserted-by":"crossref","first-page":"507","DOI":"10.1086\/289019","article-title":"Explanatory unification","volume":"48","author":"Kitcher","year":"1981","journal-title":"Philos. Sci."},{"key":"ref_157","first-page":"143","article-title":"The pragmatics of explanation","volume":"14","year":"1977","journal-title":"Am. Philos. 
Q."}],"container-title":["Algorithms"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1999-4893\/18\/6\/306\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T17:39:11Z","timestamp":1760031551000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1999-4893\/18\/6\/306"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,5,23]]},"references-count":157,"journal-issue":{"issue":"6","published-online":{"date-parts":[[2025,6]]}},"alternative-id":["a18060306"],"URL":"https:\/\/doi.org\/10.3390\/a18060306","relation":{},"ISSN":["1999-4893"],"issn-type":[{"value":"1999-4893","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,5,23]]}}}