{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,5]],"date-time":"2026-01-05T11:12:19Z","timestamp":1767611539578,"version":"3.41.0"},"reference-count":130,"publisher":"Walter de Gruyter GmbH","issue":"1","license":[{"start":{"date-parts":[[2025,2,1]],"date-time":"2025-02-01T00:00:00Z","timestamp":1738368000000},"content-version":"unspecified","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2025,2,25]]},"abstract":"<jats:title>Abstract<\/jats:title>\n               <jats:p>As artificial intelligence (AI) increasingly permeates high-stakes domains such as healthcare, transportation, and law enforcement, ensuring its trustworthiness has become a critical challenge. This article proposes an integrative Explainable AI (XAI) framework to address the challenges of interpretability, explainability, interactivity, and robustness. By combining XAI methods, incorporating human-AI interaction and using suitable evaluation techniques, the implementation of this framework serves as a holistic XAI approach. 
The article discusses the framework\u2019s contribution to trustworthy AI and gives an outlook on open challenges related to interdisciplinary collaboration, AI generalization and AI evaluation.<\/jats:p>","DOI":"10.1515\/itit-2025-0007","type":"journal-article","created":{"date-parts":[[2025,4,30]],"date-time":"2025-04-30T07:58:33Z","timestamp":1745999913000},"page":"20-45","source":"Crossref","is-referenced-by-count":1,"title":["Toward trustworthy AI with integrative explainable AI frameworks"],"prefix":"10.1515","volume":"67","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-9415-6254","authenticated-orcid":false,"given":"Bettina","family":"Finzel","sequence":"first","affiliation":[{"name":"University of Bamberg, Cognitive Systems , Bamberg , Germany"}]}],"member":"374","published-online":{"date-parts":[[2025,5,1]]},"reference":[{"unstructured":"European Parliament and Council of the European Union, \u201cRegulation (eu) 2024\/1689 of the european parliament and of the council of 13 june 2024 laying down harmonised rules on artificial intelligence and amending regulations (ec) no 300\/2008, (eu) no 167\/2013, (eu) no 168\/2013, (eu) 2018\/858, (eu) 2018\/1139 and (eu) 2019\/2144 and directives 2014\/90\/eu, (eu) 2016\/797 and (eu) 2020\/1828 (artificial intelligence act) (text with eea relevance),\u201d [Online]. Available: https:\/\/eur-lex.europa.eu\/eli\/reg\/2024\/1689\/oj.","key":"2025053021205068381_j_itit-2025-0007_ref_001"},{"doi-asserted-by":"crossref","unstructured":"N. A. Smuha, \u201cThe eu approach to ethics guidelines for trustworthy artificial intelligence,\u201d Comput. Law Rev. Int., vol.\u00a020, no.\u00a04, pp.\u00a097\u2013106, 2019, https:\/\/doi.org\/10.9785\/cri-2019-200402.","key":"2025053021205068381_j_itit-2025-0007_ref_002","DOI":"10.9785\/cri-2019-200402"},{"doi-asserted-by":"crossref","unstructured":"S. Ali, T. Abuhmed, S. El-Sappagh, K. Muhammad, J. M. Alonso-Moral, R. 
Confalonieri, \u201cExplainable artificial intelligence (XAI): what we know and what is left to attain trustworthy artificial intelligence,\u201d Inf. Fusion, vol.\u00a099, p.\u00a0101805, 2023. [Online]. Available: https:\/\/doi.org\/10.1016\/j.inffus.2023.101805.","key":"2025053021205068381_j_itit-2025-0007_ref_003","DOI":"10.1016\/j.inffus.2023.101805"},{"doi-asserted-by":"crossref","unstructured":"A. Chaddad, J. Peng, J. Xu, and A. Bouridane, \u201cSurvey of explainable AI techniques in healthcare,\u201d Sensors, vol.\u00a023, no.\u00a02, p.\u00a0634, 2023. [Online]. Available: https:\/\/doi.org\/10.3390\/s23020634.","key":"2025053021205068381_j_itit-2025-0007_ref_004","DOI":"10.3390\/s23020634"},{"doi-asserted-by":"crossref","unstructured":"D. Kaur, S. Uslu, K. J. Rittichier, and A. Durresi, \u201cTrustworthy artificial intelligence: a review,\u201d ACM Comput. Surv., vol.\u00a055, no.\u00a02, pp.\u00a039:1\u201339:38, 2023. [Online]. Available: https:\/\/doi.org\/10.1145\/3491209.","key":"2025053021205068381_j_itit-2025-0007_ref_005","DOI":"10.1145\/3491209"},{"doi-asserted-by":"crossref","unstructured":"N. D. Rodr\u00edguez, J. D. Ser, M. Coeckelbergh, M. L. de Prado, E. Herrera-Viedma, and F. Herrera, \u201cConnecting the dots in trustworthy artificial intelligence: from AI principles, ethics, and key requirements to responsible AI systems and regulation,\u201d Inf. Fusion, vol.\u00a099, p.\u00a0101896, 2023. [Online]. Available: https:\/\/doi.org\/10.1016\/j.inffus.2023.101896.","key":"2025053021205068381_j_itit-2025-0007_ref_006","DOI":"10.1016\/j.inffus.2023.101896"},{"doi-asserted-by":"crossref","unstructured":"B. Shneiderman, \u201cHuman-centered artificial intelligence: reliable, safe & trustworthy,\u201d Int. J. Hum. Comput. Interact., vol.\u00a036, no.\u00a06, pp.\u00a0495\u2013504, 2020. [Online]. 
Available: https:\/\/doi.org\/10.1080\/10447318.2020.1741118.","key":"2025053021205068381_j_itit-2025-0007_ref_007","DOI":"10.1080\/10447318.2020.1741118"},{"doi-asserted-by":"crossref","unstructured":"B. Kim and F. Doshi-Velez, \u201cMachine learning techniques for accountability,\u201d AI Mag, vol.\u00a042, no.\u00a01, pp.\u00a047\u201352, 2021. [Online]. Available: https:\/\/ojs.aaai.org\/index.php\/aimagazine\/article\/view\/7481.","key":"2025053021205068381_j_itit-2025-0007_ref_008","DOI":"10.1002\/j.2371-9621.2021.tb00010.x"},{"doi-asserted-by":"crossref","unstructured":"C. Rudin, \u201cStop explaining black box machine learning models for high stakes decisions and use interpretable models instead,\u201d Nat. Mach. Intell., vol.\u00a01, no.\u00a05, pp.\u00a0206\u2013215, 2019. [Online]. Available: https:\/\/doi.org\/10.1038\/s42256-019-0048-x.","key":"2025053021205068381_j_itit-2025-0007_ref_009","DOI":"10.1038\/s42256-019-0048-x"},{"doi-asserted-by":"crossref","unstructured":"A. P\u00e1ez, \u201cThe pragmatic turn in explainable artificial intelligence (XAI),\u201d Minds Mach., vol.\u00a029, no.\u00a03, pp.\u00a0441\u2013459, 2019. [Online]. Available: https:\/\/doi.org\/10.1007\/s11023-019-09502-w.","key":"2025053021205068381_j_itit-2025-0007_ref_010","DOI":"10.1007\/s11023-019-09502-w"},{"doi-asserted-by":"crossref","unstructured":"A. Holzinger, \u201cInteractive machine learning for health informatics: when do we need the human-in-the-loop?\u201d Brain Informatics, vol.\u00a03, no.\u00a02, pp.\u00a0119\u2013131, 2016. [Online]. Available: https:\/\/doi.org\/10.1007\/s40708-016-0042-6.","key":"2025053021205068381_j_itit-2025-0007_ref_011","DOI":"10.1007\/s40708-016-0042-6"},{"doi-asserted-by":"crossref","unstructured":"B. Finzel, \u201cHuman-centered explanations: lessons learned from image classification for medical and clinical decision making,\u201d K\u00fcnstliche Intell, vol.\u00a038, no.\u00a03, pp.\u00a0157\u2013167, 2024. [Online]. 
Available: https:\/\/doi.org\/10.1007\/s13218-024-00835-y.","key":"2025053021205068381_j_itit-2025-0007_ref_012","DOI":"10.1007\/s13218-024-00835-y"},{"doi-asserted-by":"crossref","unstructured":"S. H. Muggleton, U. Schmid, C. Zeller, A. Tamaddoni-Nezhad, and T. R. Besold, \u201cUltra-strong machine learning: comprehensibility of programs learned with ILP,\u201d Mach. Learn., vol.\u00a0107, no.\u00a07, pp.\u00a01119\u20131140, 2018. [Online]. Available: https:\/\/doi.org\/10.1007\/s10994-018-5707-3.","key":"2025053021205068381_j_itit-2025-0007_ref_013","DOI":"10.1007\/s10994-018-5707-3"},{"doi-asserted-by":"crossref","unstructured":"S. Amershi, M. Cakmak, W. B. Knox, and T. Kulesza, \u201cPower to the people: the role of humans in interactive machine learning,\u201d AI Mag, vol.\u00a035, no.\u00a04, pp.\u00a0105\u2013120, 2014. [Online]. Available: https:\/\/doi.org\/10.1609\/aimag.v35i4.2513.","key":"2025053021205068381_j_itit-2025-0007_ref_014","DOI":"10.1609\/aimag.v35i4.2513"},{"doi-asserted-by":"crossref","unstructured":"K. Gobel, C. Niessen, S. Seufert, and U. Schmid, \u201cExplanatory machine learning for justified trust in human-ai collaboration: experiments on file deletion recommendations,\u201d Front. Artif. Intell., vol.\u00a05, 2022, Art. no. 919534. [Online]. Available: https:\/\/doi.org\/10.3389\/frai.2022.919534.","key":"2025053021205068381_j_itit-2025-0007_ref_015","DOI":"10.3389\/frai.2022.919534"},{"doi-asserted-by":"crossref","unstructured":"G. Schwalbe and B. Finzel, \u201cA comprehensive taxonomy for explainable artificial intelligence: a systematic survey of surveys on methods and concepts,\u201d Data Min. Knowl. Discov., vol.\u00a038, pp.\u00a03043\u20133101, 2023. [Online]. Available: https:\/\/doi.org\/10.1007\/s10618-022-00867-8.","key":"2025053021205068381_j_itit-2025-0007_ref_016","DOI":"10.1007\/s10618-022-00867-8"},{"doi-asserted-by":"crossref","unstructured":"D. Gunning and D. W. 
Aha, \u201cDarpa\u2019s explainable artificial intelligence (XAI) program,\u201d AI Mag, vol.\u00a040, no.\u00a02, pp.\u00a044\u201358, 2019. [Online]. Available: https:\/\/doi.org\/10.1609\/aimag.v40i2.2850.","key":"2025053021205068381_j_itit-2025-0007_ref_017","DOI":"10.1609\/aimag.v40i2.2850"},{"doi-asserted-by":"crossref","unstructured":"A. Adadi and M. Berrada, \u201cPeeking inside the black-box: a survey on explainable artificial intelligence (XAI),\u201d IEEE Access., vol.\u00a06, pp.\u00a052 138\u201352 160, 2018. [Online]. Available: https:\/\/doi.org\/10.1109\/ACCESS.2018.2870052.","key":"2025053021205068381_j_itit-2025-0007_ref_018","DOI":"10.1109\/ACCESS.2018.2870052"},{"unstructured":"C. T. Lewis and C. Short, A Latin Dictionary, Oxford, Clarendon Press, 1879.","key":"2025053021205068381_j_itit-2025-0007_ref_019"},{"doi-asserted-by":"crossref","unstructured":"R. R. Hoffman, T. Miller, G. Klein, S. T. Mueller, and W. J. Clancey, \u201cIncreasing the value of XAI for users: a psychological perspective,\u201d K\u00fcnstliche Intell., vol.\u00a037, no.\u00a02, pp.\u00a0237\u2013247, 2023. [Online]. Available: https:\/\/doi.org\/10.1007\/s13218-023-00806-9.","key":"2025053021205068381_j_itit-2025-0007_ref_020","DOI":"10.1007\/s13218-023-00806-9"},{"doi-asserted-by":"crossref","unstructured":"K. J. Rohlfing, P. Cimiano, I. Scharlau, T. Matzner, H. M. Buhl, H. Buschmeier, \u201cExplanation as a social practice: toward a conceptual framework for the social design of AI systems,\u201d IEEE Trans. Cogn. Dev. Syst., vol.\u00a013, no.\u00a03, pp.\u00a0717\u2013728, 2021. [Online]. Available: https:\/\/doi.org\/10.1109\/TCDS.2020.3044366.","key":"2025053021205068381_j_itit-2025-0007_ref_021","DOI":"10.1109\/TCDS.2020.3044366"},{"doi-asserted-by":"crossref","unstructured":"T. Miller, \u201cExplanation in artificial intelligence: insights from the social sciences,\u201d Artif. Intell., vol.\u00a0267, pp.\u00a01\u201338, 2019. [Online]. 
Available: https:\/\/doi.org\/10.1016\/j.artint.2018.07.007.","key":"2025053021205068381_j_itit-2025-0007_ref_022","DOI":"10.1016\/j.artint.2018.07.007"},{"doi-asserted-by":"crossref","unstructured":"F. C. Keil, \u201cExplanation and understanding,\u201d Annu. Rev. Psychol., vol.\u00a057, pp.\u00a0227\u2013254, 2006, https:\/\/doi.org\/10.1146\/annurev.psych.57.102904.190100.","key":"2025053021205068381_j_itit-2025-0007_ref_023","DOI":"10.1146\/annurev.psych.57.102904.190100"},{"doi-asserted-by":"crossref","unstructured":"S. Bruckert, B. Finzel, and U. Schmid, \u201cThe next generation of medical decision support: a roadmap toward transparent expert companions,\u201d Front. Artif. Intell., vol.\u00a03, 2020, Art. no. 507973. [Online]. Available: https:\/\/www.frontiersin.org\/articles\/10.3389\/frai.2020.507973.","key":"2025053021205068381_j_itit-2025-0007_ref_024","DOI":"10.3389\/frai.2020.507973"},{"doi-asserted-by":"crossref","unstructured":"C. Rudin, C. Chen, Z. Chen, H. Huang, L. Semenova, and C. Zhong, \u201cInterpretable machine learning: fundamental principles and 10 grand challenges,\u201d Stat. Surv., vol.\u00a016, pp.\u00a01\u201385, 2022, https:\/\/doi.org\/10.1214\/21-ss133.","key":"2025053021205068381_j_itit-2025-0007_ref_025","DOI":"10.1214\/21-SS133"},{"doi-asserted-by":"crossref","unstructured":"U. Schmid and B. Finzel, \u201cMutual explanations for cooperative decision making in medicine,\u201d K\u00fcnstliche Intell., vol.\u00a034, no.\u00a02, pp.\u00a0227\u2013233, 2020. [Online]. Available: https:\/\/doi.org\/10.1007\/s13218-020-00633-2.","key":"2025053021205068381_j_itit-2025-0007_ref_026","DOI":"10.1007\/s13218-020-00633-2"},{"doi-asserted-by":"crossref","unstructured":"A. Mohammed, C. Geppert, A. Hartmann, P. Kuritcyn, V. Bruns, U. Schmid, \u201cExplaining and evaluating deep tissue classification by visualizing activations of most relevant intermediate layers,\u201d Current Dir. Biomed. 
Eng., vol.\u00a08, no.\u00a02, pp.\u00a0229\u2013232, 2022, https:\/\/doi.org\/10.1515\/cdbme-2022-1059.","key":"2025053021205068381_j_itit-2025-0007_ref_027","DOI":"10.1515\/cdbme-2022-1059"},{"doi-asserted-by":"crossref","unstructured":"G. Vilone and L. Longo, \u201cClassification of explainable artificial intelligence methods through their output formats,\u201d Mach. Learn. Knowl. Extr., vol.\u00a03, no.\u00a03, pp.\u00a0615\u2013661, 2021. [Online]. Available: https:\/\/doi.org\/10.3390\/make3030032.","key":"2025053021205068381_j_itit-2025-0007_ref_028","DOI":"10.3390\/make3030032"},{"doi-asserted-by":"crossref","unstructured":"Z. C. Lipton, \u201cThe mythos of model interpretability,\u201d Commun. ACM, vol.\u00a061, no.\u00a010, pp.\u00a036\u201343, 2018. [Online]. Available: https:\/\/doi.org\/10.1145\/3233231.","key":"2025053021205068381_j_itit-2025-0007_ref_029","DOI":"10.1145\/3233231"},{"unstructured":"B. Finzel, P. Hilme, J. Rabold, and U. Schmid, \u201cTelling more with concepts and relations: exploring and evaluating classifier decisions with CoReX,\u201d CoRR, vols. abs\/2405, p.\u00a001661, 2024.","key":"2025053021205068381_j_itit-2025-0007_ref_030"},{"unstructured":"B. Finzel, D. E. Tafler, A. M. Thaler, and U. Schmid, \u201cMultimodal explanations for user-centric medical decision support systems,\u201d in Proceedings Of the AAAI 2021 Fall Symposium on Human Partnership with Medical AI: Design, Operationalization, and Ethics (AAAI-HUMAN 2021), Virtual Event, November 4-6, 2021, ser. CEUR Workshop Proceedings, T. E. Doyle, A. Kelliher, R. Samavi, B. Barry, S. J. Yule, S. Parker, M. D. Noseworthy, and Q. Yang, Eds., vol.\u00a03068. CEUR-WS.org, 2021. [Online]. Available: https:\/\/ceur-ws.org\/Vol-3068\/short2.pdf.","key":"2025053021205068381_j_itit-2025-0007_ref_031"},{"doi-asserted-by":"crossref","unstructured":"L. Schallner, J. Rabold, O. Scholz, and U. 
Schmid, \u201cEffect of superpixel aggregation on explanations in LIME \u2013 a case study with biological data,\u201d in Machine Learning and Knowledge Discovery in Databases \u2013 International Workshops of ECML PKDD 2019, W\u00fcrzburg, Germany, September 16-20, 2019, Proceedings, Part I, Ser. Communications in Computer and Information Science, P. Cellier and K. Driessens, Eds., vol.\u00a01167. Springer, 2019, pp.\u00a0147\u2013158. [Online]. Available: https:\/\/doi.org\/10.1007\/978-3-030-43823-4_13.","key":"2025053021205068381_j_itit-2025-0007_ref_032","DOI":"10.1007\/978-3-030-43823-4_13"},{"doi-asserted-by":"crossref","unstructured":"A. Heimerl, K. Weitz, T. Baur, and E. Andr\u00e9, \u201cUnraveling ML models of emotion with NOVA: multi-level explainable AI for non-experts,\u201d IEEE Trans. Affect. Comput., vol.\u00a013, no.\u00a03, pp.\u00a01155\u20131167, 2022. [Online]. Available: https:\/\/doi.org\/10.1109\/TAFFC.2020.3043603.","key":"2025053021205068381_j_itit-2025-0007_ref_033","DOI":"10.1109\/TAFFC.2020.3043603"},{"unstructured":"I. Rieger, R. Kollmann, B. Finzel, D. Seuss, and U. Schmid, \u201cVerifying deep learning-based decisions for facial expression recognition,\u201d in 28th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2020, Bruges, Belgium, October 2-4, 2020, 2020, pp.\u00a0139\u2013144. [Online]. Available: https:\/\/www.esann.org\/sites\/default\/files\/proceedings\/2020\/ES2020-49.pdf.","key":"2025053021205068381_j_itit-2025-0007_ref_034"},{"doi-asserted-by":"crossref","unstructured":"I. Stepin, J. M. Alonso, A. Catal\u00e1, and M. Pereira-Fari\u00f1a, \u201cA survey of contrastive and counterfactual explanation generation methods for explainable artificial intelligence,\u201d IEEE Access., vol.\u00a09, pp.\u00a011 974\u201312 001, 2021. [Online]. 
Available: https:\/\/doi.org\/10.1109\/ACCESS.2021.3051315.","key":"2025053021205068381_j_itit-2025-0007_ref_035","DOI":"10.1109\/ACCESS.2021.3051315"},{"doi-asserted-by":"crossref","unstructured":"P. Lipton, \u201cInference to the best explanation,\u201d in A companion to the philosophy of science, W. Newton-Smith, Ed., Blackwell, 2000, pp.\u00a0184\u2013193.","key":"2025053021205068381_j_itit-2025-0007_ref_036","DOI":"10.1002\/9781405164481.ch29"},{"doi-asserted-by":"crossref","unstructured":"M. Nauta, A. Jutte, J. C. Provoost, and C. Seifert, \u201cThis looks like that, because \u2026explaining prototypes for interpretable image recognition,\u201d in Machine Learning and Principles and Practice of Knowledge Discovery in Databases \u2013 International Workshops of ECML PKDD 2021, Virtual Event, September 13-17, 2021, Proceedings, Part I, Ser. Communications in Computer and Information Science, M. Kamp, I. Koprinska, A. Bibal, T. Bouadi, B. Fr\u00e9nay, L. Gal\u00e1rraga, J. Oramas, L. Adilova, Y. Krishnamurthy, B. Kang, C. Largeron, J. Lijffijt, T. Viard, P. Welke, M. Ruocco, E. Aune, C. Gallicchio, G. Schiele, F. Pernkopf, M. Blott, H. Fr\u00f6ning, G. Schindler, R. Guidotti, A. Monreale, S. Rinzivillo, P. Biecek, E. Ntoutsi, M. Pechenizkiy, B. Rosenhahn, C. L. Buckley, D. Cialfi, P. Lanillos, M. Ramstead, T. Verbelen, P. M. Ferreira, G. Andresini, D. Malerba, I. Medeiros, P. Fournier-Viger, M. S. Nawaz, S. Ventura, M. Sun, M. Zhou, V. Bitetta, I. Bordino, A. Ferretti, F. Gullo, G. Ponti, L. Severini, R. P. Ribeiro, J. Gama, R. Gavald\u00e0, L. Cooper, N. Ghazaleh, J. Richiardi, D. Roqueiro, D. S. Miranda, K. Sechidis, and G. Gra\u00e7a, Eds., vol.\u00a01524. Springer, 2021, pp.\u00a0441\u2013456. [Online]. Available: https:\/\/doi.org\/10.1007\/978-3-030-93736-2_34.","key":"2025053021205068381_j_itit-2025-0007_ref_037","DOI":"10.1007\/978-3-030-93736-2_34"},{"doi-asserted-by":"crossref","unstructured":"M. Nauta, R. van Bree, and C. 
Seifert, \u201cNeural prototype trees for interpretable fine-grained image recognition,\u201d in IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, Computer Vision Foundation\/IEEE, 2021, pp.\u00a014 933\u201314 943. [Online]. Available: https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/html\/Nauta_Neural_Prototype_Trees_for_Interpretable_Fine-Grained_Image_Recognition_CVPR_2021_paper.html.","key":"2025053021205068381_j_itit-2025-0007_ref_038","DOI":"10.1109\/CVPR46437.2021.01469"},{"unstructured":"B. Kim, O. Koyejo, and R. Khanna, \u201cExamples are not enough, learn to criticize! criticism for interpretability,\u201d in Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, D. D. Lee, M. Sugiyama, U. von Luxburg, I. Guyon, and R. Garnett, Barcelona, Spain, 2016, pp.\u00a02280\u20132288. [Online]. Available: https:\/\/proceedings.neurips.cc\/paper\/2016\/hash\/5680522b8e2bb01943234bce7bf84534-Abstract.html.","key":"2025053021205068381_j_itit-2025-0007_ref_039"},{"unstructured":"B. Kim, C. Rudin, and J. A. Shah, \u201cThe bayesian case model: a generative approach for case-based reasoning and prototype classification,\u201d in Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, Montreal, Quebec, Canada, 2014, pp.\u00a01952\u20131960. [Online]. Available: https:\/\/proceedings.neurips.cc\/paper\/2014\/hash\/390e982518a50e280d8e2b535462ec1f-Abstract.html.","key":"2025053021205068381_j_itit-2025-0007_ref_040"},{"doi-asserted-by":"crossref","unstructured":"F. S\u00f8rmo, J. Cassens, and A. Aamodt, \u201cExplanation in case-based reasoning-perspectives and goals,\u201d Artif. Intell. Rev., vol.\u00a024, no.\u00a02, pp.\u00a0109\u2013143, 2005. [Online]. 
Available: https:\/\/doi.org\/10.1007\/s10462-005-4607-7.","key":"2025053021205068381_j_itit-2025-0007_ref_041","DOI":"10.1007\/s10462-005-4607-7"},{"doi-asserted-by":"crossref","unstructured":"B. Finzel, J. Knoblach, A. M. Thaler, and U. Schmid, \u201cNear hit and near miss example explanations for model revision in binary image classification,\u201d in Intelligent Data Engineering and Automated Learning \u2013 IDEAL 2024 \u2013 25th International Conference, Valencia, Spain, November 20-22, 2024, Proceedings, Part II, Ser. Lecture Notes in Computer Science, V. Juli\u00e1n, D. Camacho, H. Yin, J. M. Alberola, V. B. Nogueira, P. Novais, and A. J. Tall\u00f3n-Ballesteros, Eds., vol.\u00a015347. Springer, 2024, pp.\u00a0260\u2013271. [Online]. Available: https:\/\/doi.org\/10.1007\/978-3-031-77738-7_22.","key":"2025053021205068381_j_itit-2025-0007_ref_042","DOI":"10.1007\/978-3-031-77738-7_22"},{"doi-asserted-by":"crossref","unstructured":"B. Finzel, S. P. Kuhn, D. E. Tafler, and U. Schmid, \u201cExplaining with attribute-based and relational near misses: an interpretable approach to distinguishing facial expressions of pain and disgust,\u201d in Inductive Logic Programming, S. H. Muggleton, and A. Tamaddoni-Nezhad, Eds., Cham, Springer Nature Switzerland, 2024, pp.\u00a040\u201351.","key":"2025053021205068381_j_itit-2025-0007_ref_043","DOI":"10.1007\/978-3-031-55630-2_4"},{"doi-asserted-by":"crossref","unstructured":"A. Poch\u00e9, L. Hervier, and M. C. Bakkay, \u201cNatural example-based explainability: a survey,\u201d in Explainable Artificial Intelligence \u2013 First World Conference, xAI 2023, Lisbon, Portugal, July 26-28, 2023, Proceedings, Part II, ser. Communications in Computer and Information Science, L. Longo, Ed., vol.\u00a01902. Springer, 2023, pp.\u00a024\u201347. [Online]. Available.","key":"2025053021205068381_j_itit-2025-0007_ref_044","DOI":"10.1007\/978-3-031-44067-0_2"},{"unstructured":"A. Bontempelli, S. Teso, K. Tentori, F. Giunchiglia, and A. 
Passerini, \u201cConcept-level debugging of part-prototype networks,\u201d in The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023, OpenReview.net, 2023. [Online]. Available: https:\/\/openreview.net\/pdf?id=oiwXWPDTyNk.","key":"2025053021205068381_j_itit-2025-0007_ref_045"},{"unstructured":"B. Finzel, R. Kollmann, I. Rieger, J. Pahl, and U. Schmid, \u201cDeriving temporal prototypes from saliency map clusters for the analysis of deep-learning-based facial action unit classification,\u201d in Proceedings of the LWDA 2021 Workshops: FGWM, KDML, FGWI-BIA, and FGIR, Online, September 1-3, 2021, Ser. CEUR Workshop Proceedings, T. Seidl, M. Fromm, and S. Obermeier, Eds., vol.\u00a02993. CEUR-WS.org, 2021, pp.\u00a086\u201397. [Online]. Available: https:\/\/ceur-ws.org\/Vol-2993\/paper-09.pdf.","key":"2025053021205068381_j_itit-2025-0007_ref_046"},{"doi-asserted-by":"crossref","unstructured":"E. Rosch, \u201cWittgenstein and categorization research in cognitive psychology,\u201d in Meaning and the growth of understanding: Wittgenstein\u2019s significance for developmental psychology, M. Chapman, and R. A. Dixon, Eds., Springer, 1987, pp.\u00a0151\u2013166. [Online].","key":"2025053021205068381_j_itit-2025-0007_ref_047","DOI":"10.1007\/978-3-642-83023-5_9"},{"doi-asserted-by":"crossref","unstructured":"E. H. Rosch, \u201cNatural categories,\u201d Cogn. Psychol., vol.\u00a04, no.\u00a03, pp.\u00a0328\u2013350, 1973, https:\/\/doi.org\/10.1016\/0010-0285(73)90017-0.","key":"2025053021205068381_j_itit-2025-0007_ref_048","DOI":"10.1016\/0010-0285(73)90017-0"},{"doi-asserted-by":"crossref","unstructured":"B. Finzel, D. E. Tafler, S. Scheele, and U. Schmid, \u201cExplanation as a process: user-centric construction of multi-level and multi-modal explanations,\u201d in KI 2021: Advances in Artificial Intelligence \u2013 44th German Conference on AI, Virtual Event, September 27 \u2013 October 1, 2021, Proceedings, Ser. 
Lecture Notes in Computer Science, S. Edelkamp, R. M\u00f6ller, and E. Rueckert, Eds., vol.\u00a012873. Springer, 2021, pp.\u00a080\u201394, https:\/\/doi.org\/10.1007\/978-3-030-87626-5_7.","key":"2025053021205068381_j_itit-2025-0007_ref_049","DOI":"10.1007\/978-3-030-87626-5_7"},{"doi-asserted-by":"crossref","unstructured":"J. Lamy, B. D. Sekar, G. Gu\u00e9zennec, J. Bouaud, and B. S\u00e9roussi, \u201cExplainable artificial intelligence for breast cancer: a visual case-based reasoning approach,\u201d Artif. Intell. Med., vol.\u00a094, pp.\u00a042\u201353, 2019. [Online]. Available: https:\/\/doi.org\/10.1016\/j.artmed.2019.01.001.","key":"2025053021205068381_j_itit-2025-0007_ref_050","DOI":"10.1016\/j.artmed.2019.01.001"},{"unstructured":"E. Poeta, G. Ciravegna, E. Pastor, T. Cerquitelli, and E. Baralis, \u201cConcept-based explainable artificial intelligence: a survey,\u201d ArXiv Preprint arXiv:2312.12936, 2023.","key":"2025053021205068381_j_itit-2025-0007_ref_051"},{"doi-asserted-by":"crossref","unstructured":"T. Miller, \u201cExplainable AI is dead, long live explainable ai!: hypothesis-driven decision support using evaluative AI,\u201d in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023, Chicago, IL, USA, June 12-15, 2023, ACM, 2023, pp.\u00a0333\u2013342. [Online]. Available.","key":"2025053021205068381_j_itit-2025-0007_ref_052","DOI":"10.1145\/3593013.3594001"},{"doi-asserted-by":"crossref","unstructured":"J. van der Waa, E. Nieuwburg, A. H. M. Cremers, and M. A. Neerincx, \u201cEvaluating XAI: a comparison of rule-based and example-based explanations,\u201d Artif. Intell., vol.\u00a0291, p.\u00a0103404, 2021. [Online]. Available: https:\/\/doi.org\/10.1016\/j.artint.2020.103404.","key":"2025053021205068381_j_itit-2025-0007_ref_053","DOI":"10.1016\/j.artint.2020.103404"},{"doi-asserted-by":"crossref","unstructured":"M. Salvi, S. Seoni, A. Campagner, A. Gertych, U. Acharya, F. 
Molinari, \u201cExplainability and uncertainty: two sides of the same coin for enhancing the interpretability of deep learning models in healthcare,\u201d Int. J. Med. Informatics, vol.\u00a0197, p.\u00a0105846, 2025. [Online]. Available: https:\/\/doi.org\/10.1016\/j.ijmedinf.2025.105846.","key":"2025053021205068381_j_itit-2025-0007_ref_054","DOI":"10.1016\/j.ijmedinf.2025.105846"},{"doi-asserted-by":"crossref","unstructured":"V. Kamakshi and N. C. Krishnan, \u201cExplainable image classification: the journey so far and the road ahead,\u201d AI, vol.\u00a04, no.\u00a03, pp.\u00a0620\u2013651, 2023. [Online]. Available:, https:\/\/doi.org\/10.3390\/ai4030033 https:\/\/www.mdpi.com\/2673-2688\/4\/3\/33.","key":"2025053021205068381_j_itit-2025-0007_ref_055","DOI":"10.3390\/ai4030033"},{"doi-asserted-by":"crossref","unstructured":"A. Holzinger, G. Langs, H. Denk, K. Zatloukal, and H. M\u00fcller, \u201cCausability and explainability of artificial intelligence in medicine,\u201d WIREs Data Min. Knowl. Discov., vol.\u00a09, no.\u00a04, 2019, Art. no. e1312. [Online]. Available: https:\/\/doi.org\/10.1002\/widm.1312.","key":"2025053021205068381_j_itit-2025-0007_ref_056","DOI":"10.1002\/widm.1312"},{"doi-asserted-by":"crossref","unstructured":"B. Mihaljevic, C. Bielza, and P. Larra\u00f1aga, \u201cBayesian networks for interpretable machine learning and optimization,\u201d Neurocomputing, vol.\u00a0456, pp.\u00a0648\u2013665, 2021. [Online]. Available: https:\/\/doi.org\/10.1016\/j.neucom.2021.01.138.","key":"2025053021205068381_j_itit-2025-0007_ref_057","DOI":"10.1016\/j.neucom.2021.01.138"},{"doi-asserted-by":"crossref","unstructured":"N. Rodis, C. Sardianos, P. I. Radoglou-Grammatikis, P. G. Sarigiannidis, I. Varlamis, and G. T. Papadopoulos, \u201cMultimodal explainable artificial intelligence: a comprehensive review of methodological advances and future research directions,\u201d IEEE Access., vol.\u00a012, pp.\u00a0159 794\u2013159 820, 2024. [Online]. 
Available: https:\/\/doi.org\/10.1109\/ACCESS.2024.3467062.","key":"2025053021205068381_j_itit-2025-0007_ref_058","DOI":"10.1109\/ACCESS.2024.3467062"},{"unstructured":"Y. Xuan, K. Sokol, M. Sanderson, and J. Chan, \u201cLeveraging complementary ai explanations to mitigate misunderstanding in xai,\u201d, 2025. [Online]. Available: https:\/\/arxiv.org\/abs\/2503.00303.","key":"2025053021205068381_j_itit-2025-0007_ref_059"},{"doi-asserted-by":"crossref","unstructured":"G. Lv, L. Chen, and C. C. Cao, \u201cOn glocal explainability of graph neural networks,\u201d in Database Systems for Advanced Applications \u2013 27th International Conference, DASFAA 2022, Virtual Event, April 11-14, 2022, Proceedings, Part I, Ser. Lecture Notes in Computer Science, A. Bhattacharya, J. Lee, M. Li, D. Agrawal, P. K. Reddy, M. K. Mohania, A. Mondal, V. Goyal, and R. U. Kiran, Eds., vol.\u00a013245. Springer, 2022, pp.\u00a0648\u2013664. [Online]. Available: https:\/\/doi.org\/10.1007\/978-3-031-00123-9_52.","key":"2025053021205068381_j_itit-2025-0007_ref_060","DOI":"10.1007\/978-3-031-00123-9_52"},{"doi-asserted-by":"crossref","unstructured":"D. Mindlin, F. Beer, L. N. Sieger, S. Heindorf, E. Esposito, A. C. Ngonga Ngomo, \u201cBeyond one-shot explanations: a systematic literature review of dialogue-based xai approaches,\u201d Artif. Intell. Rev., vol.\u00a058, no.\u00a03, p.\u00a081, 2025. [Online]. Available: https:\/\/doi.org\/10.1007\/s10462-024-11007-7.","key":"2025053021205068381_j_itit-2025-0007_ref_061","DOI":"10.1007\/s10462-024-11007-7"},{"doi-asserted-by":"crossref","unstructured":"K. Sokol and P. A. Flach, \u201cOne explanation does not fit all,\u201d K\u00fcnstliche Intell., vol.\u00a034, no.\u00a02, pp.\u00a0235\u2013250, 2020. [Online]. Available: https:\/\/doi.org\/10.1007\/s13218-020-00637-y.","key":"2025053021205068381_j_itit-2025-0007_ref_062","DOI":"10.1007\/s13218-020-00637-y"},{"doi-asserted-by":"crossref","unstructured":"S. Teso and K. 
Kersting, \u201cExplanatory interactive machine learning,\u201d in Proc. of the AAAI\/ACM AIES, V. Conitzer, G. K. Hadfield, and S. Vallor, Eds., ACM, 2019, pp.\u00a0239\u2013245. [Online].","key":"2025053021205068381_j_itit-2025-0007_ref_063","DOI":"10.1145\/3306618.3314293"},{"doi-asserted-by":"crossref","unstructured":"S. Teso, \u00d6. Alkan, W. Stammer, and E. Daly, \u201cLeveraging explanations in interactive machine learning: an overview,\u201d Front. Artif. Intell., vol.\u00a06, 2023, Art. no. 1066049. [Online]. Available: https:\/\/doi.org\/10.3389\/frai.2023.1066049.","key":"2025053021205068381_j_itit-2025-0007_ref_064","DOI":"10.3389\/frai.2023.1066049"},{"doi-asserted-by":"crossref","unstructured":"T. Dhar, N. Dey, S. Borra, and R. S. Sherratt, \u201cChallenges of deep learning in medical image analysis\u2013improving explainability and trust,\u201d IEEE Trans. Technol. Soc., vol.\u00a04, no.\u00a01, pp.\u00a068\u201375, 2023, https:\/\/doi.org\/10.1109\/tts.2023.3234203.","key":"2025053021205068381_j_itit-2025-0007_ref_065","DOI":"10.1109\/TTS.2023.3234203"},{"doi-asserted-by":"crossref","unstructured":"M. Ghassemi, L. Oakden-Rayner, and A. L. Beam, \u201cThe false hope of current approaches to explainable artificial intelligence in health care,\u201d The Lancet Digit. Health, vol.\u00a03, no.\u00a011, pp.\u00a0e745\u2013e750, 2021. [Online]. Available: https:\/\/www.sciencedirect.com\/science\/article\/pii\/S2589750021002089.","key":"2025053021205068381_j_itit-2025-0007_ref_066","DOI":"10.1016\/S2589-7500(21)00208-9"},{"doi-asserted-by":"crossref","unstructured":"L. Famiglini, A. Campagner, M. Barandas, G. A. L. Maida, E. Gallazzi, and F. Cabitza, \u201cEvidence-based XAI: an empirical approach to design more effective and explainable decision support systems,\u201d Comput. Biol. Med., vol.\u00a0170, p.\u00a0108042, 2024. [Online]. 
Available: https:\/\/doi.org\/10.1016\/j.compbiomed.2024.108042.","key":"2025053021205068381_j_itit-2025-0007_ref_067","DOI":"10.1016\/j.compbiomed.2024.108042"},{"doi-asserted-by":"crossref","unstructured":"M. Nauta, J Trienes, S. Pathak, E. Nguyen, M. Peters, Y. Schmitt, \u201cFrom anecdotal evidence to quantitative evaluation methods: a systematic review on evaluating explainable ai,\u201d ACM Comput. Surv., vol.\u00a055, no.\u00a013s, pp.\u00a01\u201342, 2023. [Online]. Available: https:\/\/doi.org\/10.1145\/3583558.","key":"2025053021205068381_j_itit-2025-0007_ref_068","DOI":"10.1145\/3583558"},{"doi-asserted-by":"crossref","unstructured":"T. Schoonderwoerd, W. Jorritsma, M. A. Neerincx, and K. van den Bosch, \u201cHuman-centered XAI: developing design patterns for explanations of clinical decision support systems,\u201d Int. J. Hum. Comput. Stud., vol.\u00a0154, p.\u00a0102684, 2021. [Online]. Available: https:\/\/doi.org\/10.1016\/j.ijhcs.2021.102684.","key":"2025053021205068381_j_itit-2025-0007_ref_069","DOI":"10.1016\/j.ijhcs.2021.102684"},{"unstructured":"A. M. Thaler and U. Schmid, \u201cExplaining machine learned relational concepts in visual domains \u2013 effects of perceived accuracy on joint performance and trust,\u201d in Proceedings of the 43rd Annual Meeting of the Cognitive Science Society, CogSci 2021, virtual, July 26-29, 2021, W. T. Fitch, C. Lamm, H. Leder, and K. Te\u00dfmar-Raible, Eds., cognitivesciencesociety.org, 2021. [Online]. Available: https:\/\/escholarship.org\/uc\/item\/8wr7s491.","key":"2025053021205068381_j_itit-2025-0007_ref_070"},{"unstructured":"K. Weitz, An Interdisciplinary Concept for Human-Centered Explainable Artificial Intelligence \u2013 Investigating the Impact of Explainable AI on End-Users, Ph.D. dissertation, Germany, University of Augsburg, 2023. [Online]. 
Available: https:\/\/opus.bibliothek.uni-augsburg.de\/opus4\/frontdoor\/index\/index\/docId\/107511.","key":"2025053021205068381_j_itit-2025-0007_ref_071"},{"doi-asserted-by":"crossref","unstructured":"A. Suh, I. Hurley, N. Smith, and H. C. Siu, \u201cFewer than 1% of explainable ai papers validate explainability with humans,\u201d in CHI EA \u201925: Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, New York, NY, USA, ACM, 2025, pp.\u00a01\u20137. [Online]. Available: https:\/\/arxiv.org\/abs\/2503.16507.","key":"2025053021205068381_j_itit-2025-0007_ref_072","DOI":"10.1145\/3706599.3719964"},{"unstructured":"F. Doshi-Velez and B. Kim, \u201cTowards a rigorous science of interpretable machine learning,\u201d, 2017. [Online]. Available: https:\/\/arxiv.org\/abs\/1702.08608.","key":"2025053021205068381_j_itit-2025-0007_ref_073"},{"doi-asserted-by":"crossref","unstructured":"B. Finzel, I. Rieger, S. Kuhn, and U. Schmid, \u201cDomain-specific evaluation of visual explanations for application-grounded facial expression recognition,\u201d in Machine Learning and Knowledge Extraction \u2013 7th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, August 29 \u2013 September 1, 2023, Proceedings, Ser. Lecture Notes in Computer Science, A. Holzinger, P. Kieseberg, F. Cabitza, A. Campagner, A. M. Tjoa, and E. R. Weippl, Eds., vol.\u00a014065. Springer, 2023, pp.\u00a031\u201344. [Online]. Available: https:\/\/doi.org\/10.1007\/978-3-031-40837-3_3.","key":"2025053021205068381_j_itit-2025-0007_ref_074","DOI":"10.1007\/978-3-031-40837-3_3"},{"unstructured":"J. Adebayo, J. Gilmer, M. Muelly, I. J. Goodfellow, M. Hardt, and B. Kim, \u201cSanity checks for saliency maps,\u201d in Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, S. Bengio, H. M. Wallach, H. Larochelle, K. Grauman, N. 
Cesa-Bianchi, and R. Garnett, Eds., Montr\u00e9al, Canada, 2018, pp.\u00a09525\u20139536. [Online]. Available: https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/294a8ed24b1ad22ec2e7efea049b8737-Abstract.html.","key":"2025053021205068381_j_itit-2025-0007_ref_075"},{"doi-asserted-by":"crossref","unstructured":"Y. Gao, S. Gu, J. Jiang, S. R. Hong, D. Yu, and L. Zhao, \u201cGoing beyond xai: a systematic survey for explanation-guided learning,\u201d ACM Comput. Surv., vol.\u00a056, no.\u00a07, pp.\u00a01\u201339, 2024. [Online]. Available: https:\/\/doi.org\/10.1145\/3644073.","key":"2025053021205068381_j_itit-2025-0007_ref_076","DOI":"10.1145\/3644073"},{"doi-asserted-by":"crossref","unstructured":"I. Rieger, J. Pahl, B. Finzel, and U. Schmid, \u201cCorrloss: integrating co-occurrence domain knowledge for affect recognition,\u201d in 26th International Conference on Pattern Recognition, ICPR 2022, Montreal, QC, Canada, August 21-25, 2022, IEEE, 2022, pp.\u00a0798\u2013804. [Online].","key":"2025053021205068381_j_itit-2025-0007_ref_077","DOI":"10.1109\/ICPR56361.2022.9956319"},{"doi-asserted-by":"crossref","unstructured":"A. Mileo, \u201cTowards a neuro-symbolic cycle for human-centered explainability,\u201d Neurosymbolic Artif. Intell., vol.\u00a01, pp.\u00a01\u20139, 2023, preprint 691-1671. [Online]. Available: https:\/\/neurosymbolic-ai-journal.com\/paper\/towards-neuro-symbolic-cycle-human-centered-explainability.","key":"2025053021205068381_j_itit-2025-0007_ref_078","DOI":"10.3233\/NAI-240740"},{"doi-asserted-by":"crossref","unstructured":"A. Holzinger, M. Dehmer, F. Emmert-Streib, R. Cucchiara, I. Augenstein, J. D. Ser, \u201cInformation fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence,\u201d Inf. Fusion, vol.\u00a079, pp.\u00a0263\u2013278, 2022. [Online]. 
Available: https:\/\/doi.org\/10.1016\/j.inffus.2021.10.007.","key":"2025053021205068381_j_itit-2025-0007_ref_079","DOI":"10.1016\/j.inffus.2021.10.007"},{"doi-asserted-by":"crossref","unstructured":"D. Saraswat, P. Bhattacharya, A. Verma, V. K. Prasad, S. Tanwar, G. Sharma, \u201cExplainable AI for healthcare 5.0: opportunities and challenges,\u201d IEEE Access, vol.\u00a010, pp.\u00a084 486\u201384 517, 2022. [Online]. Available: https:\/\/doi.org\/10.1109\/ACCESS.2022.3197671.","key":"2025053021205068381_j_itit-2025-0007_ref_080","DOI":"10.1109\/ACCESS.2022.3197671"},{"doi-asserted-by":"crossref","unstructured":"R. Setchi, M. B. Dehkordi, and J. S. Khan, \u201cExplainable robotics in human-robot interactions,\u201d in Knowledge-Based and Intelligent Information & Engineering Systems: Proceedings of the 24th International Conference KES-2020, Virtual Event, 16-18 September 2020, Ser. Procedia Computer Science, M. Cristani, C. Toro, C. Zanni-Merk, R. J. Howlett, and L. C. Jain, Eds., vol.\u00a0176. Elsevier, 2020, pp.\u00a03057\u20133066. [Online]. Available: https:\/\/doi.org\/10.1016\/j.procs.2020.09.198.","key":"2025053021205068381_j_itit-2025-0007_ref_081","DOI":"10.1016\/j.procs.2020.09.198"},{"doi-asserted-by":"crossref","unstructured":"J. Beishuizen, \u201cStudying a complex knowledge domain by exploration or explanation,\u201d J. Comput. Assist. Learn., vol.\u00a08, no.\u00a02, pp.\u00a0104\u2013117, 1992, https:\/\/doi.org\/10.1111\/j.1365-2729.1992.tb00394.x.","key":"2025053021205068381_j_itit-2025-0007_ref_082","DOI":"10.1111\/j.1365-2729.1992.tb00394.x"},{"doi-asserted-by":"crossref","unstructured":"D. L. Langer, T. H. van der Kwast, A. J Evans, L. Sun, M. 
J Yaffe, J Trachtenberg, \u201cIntermixed normal tissue within prostate cancer: effect on mr imaging measurements of apparent diffusion coefficient and t2\u2013sparse versus dense cancers,\u201d Radiology, vol.\u00a0249, no.\u00a03, pp.\u00a0900\u2013908, 2008, https:\/\/doi.org\/10.1148\/radiol.2493080236.","key":"2025053021205068381_j_itit-2025-0007_ref_083","DOI":"10.1148\/radiol.2493080236"},{"unstructured":"W. Labov, \u201cThe boundaries of words and their meanings,\u201d N. Ways of Anal. Var. Engl., 1973, pp.\u00a0340\u2013371.","key":"2025053021205068381_j_itit-2025-0007_ref_084"},{"doi-asserted-by":"crossref","unstructured":"E. H\u00fcllermeier and W. Waegeman, \u201cAleatoric and epistemic uncertainty in machine learning: an introduction to concepts and methods,\u201d Mach. Learn., vol.\u00a0110, no.\u00a03, pp.\u00a0457\u2013506, 2021. [Online]. Available: https:\/\/doi.org\/10.1007\/s10994-021-05946-3.","key":"2025053021205068381_j_itit-2025-0007_ref_085","DOI":"10.1007\/s10994-021-05946-3"},{"doi-asserted-by":"crossref","unstructured":"J. Hern\u00e1ndez-Orallo, \u201cGazing into clever hans machines,\u201d Nat. Mach. Intell., vol.\u00a01, no.\u00a04, pp.\u00a0172\u2013173, 2019. [Online]. Available: https:\/\/doi.org\/10.1038\/s42256-019-0032-5.","key":"2025053021205068381_j_itit-2025-0007_ref_086","DOI":"10.1038\/s42256-019-0032-5"},{"doi-asserted-by":"crossref","unstructured":"M. H\u00e4gele, P. Seegerer, S. Lapuschkin, M. Bockmayr, W. Samek, F. Klauschen, \u201cResolving challenges in deep learning-based analyses of histopathological images using explanation methods,\u201d Sci. Rep., vol.\u00a010, no.\u00a01, p.\u00a06423, 2020, https:\/\/doi.org\/10.1038\/s41598-020-62724-2.","key":"2025053021205068381_j_itit-2025-0007_ref_087","DOI":"10.1038\/s41598-020-62724-2"},{"doi-asserted-by":"crossref","unstructured":"A. S. Ross, M. C. Hughes, and F. 
Doshi-Velez, \u201cRight for the right reasons: training differentiable models by constraining their explanations,\u201d in Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, C. Sierra, Ed., Ijcai.org, 2017, pp.\u00a02662\u20132670. [Online].","key":"2025053021205068381_j_itit-2025-0007_ref_088","DOI":"10.24963\/ijcai.2017\/371"},{"doi-asserted-by":"crossref","unstructured":"C. Collins, N. Andrienko, T. Schreck, J Yang, J Choo, U. Engelke, \u201cGuidance in the human-machine analytics process,\u201d Vis. Informatics, vol.\u00a02, no.\u00a03, pp.\u00a0166\u2013180, 2018. [Online]. Available: https:\/\/doi.org\/10.1016\/j.visinf.2018.09.003.","key":"2025053021205068381_j_itit-2025-0007_ref_089","DOI":"10.1016\/j.visinf.2018.09.003"},{"doi-asserted-by":"crossref","unstructured":"D. Ceneda, T. Gschwandtner, T. May, S. Miksch, H. J Schulz, M. Streit, \u201cCharacterizing guidance in visual analytics,\u201d IEEE Trans. Visual. Comput. Graph., vol.\u00a023, no.\u00a01, pp.\u00a0111\u2013120, 2017. [Online]. Available: https:\/\/doi.org\/10.1109\/TVCG.2016.2598468.","key":"2025053021205068381_j_itit-2025-0007_ref_090","DOI":"10.1109\/TVCG.2016.2598468"},{"doi-asserted-by":"crossref","unstructured":"A. Holzinger, A. Saranti, A. Angerschmid, B. Finzel, U. Schmid, and H. M\u00fcller, \u201cToward human-level concept learning: pattern benchmarking for AI algorithms,\u201d Patterns, vol.\u00a04, no.\u00a08, p.\u00a0100788, 2023. [Online]. Available: https:\/\/doi.org\/10.1016\/j.patter.2023.100788.","key":"2025053021205068381_j_itit-2025-0007_ref_091","DOI":"10.1016\/j.patter.2023.100788"},{"doi-asserted-by":"crossref","unstructured":"B. Finzel, A. Saranti, A. Angerschmid, D. E. Tafler, B. Pfeifer, and A. 
Holzinger, \u201cGenerating explanations for conceptual validation of graph neural networks: an investigation of symbolic predicates learned on relevance-ranked sub-graphs,\u201d K\u00fcnstliche Intell, vol.\u00a036, no.\u00a03, pp.\u00a0271\u2013285, 2022. [Online]. Available: https:\/\/doi.org\/10.1007\/s13218-022-00781-7.","key":"2025053021205068381_j_itit-2025-0007_ref_092","DOI":"10.1007\/s13218-022-00781-7"},{"doi-asserted-by":"crossref","unstructured":"Z. Zhang, L. Yilmaz, and B. Liu, \u201cA critical review of inductive logic programming techniques for explainable ai,\u201d IEEE Trans. Neural Networks and Learn. Syst., vol.\u00a035, no.\u00a08, pp.\u00a010 220\u201310 236, 2024, https:\/\/doi.org\/10.1109\/tnnls.2023.3246980.","key":"2025053021205068381_j_itit-2025-0007_ref_093","DOI":"10.1109\/TNNLS.2023.3246980"},{"doi-asserted-by":"crossref","unstructured":"S. H. Muggleton and L. D. Raedt, \u201cInductive logic programming: theory and methods,\u201d J. Log. Program., vols. 19\/20, pp.\u00a0629\u2013679, 1994. [Online]. Available: https:\/\/doi.org\/10.1016\/0743-1066(94)90035-3.","key":"2025053021205068381_j_itit-2025-0007_ref_094","DOI":"10.1016\/0743-1066(94)90035-3"},{"doi-asserted-by":"crossref","unstructured":"S. H. Muggleton, \u201cInductive logic programming,\u201d New Generat. Comput., vol.\u00a08, no.\u00a04, pp.\u00a0295\u2013318, 1991. [Online]. Available: https:\/\/doi.org\/10.1007\/BF03037089.","key":"2025053021205068381_j_itit-2025-0007_ref_095","DOI":"10.1007\/BF03037089"},{"unstructured":"G. Leech, N. Schoots, and J. Skalse, \u201cSafety properties of inductive logic programming,\u201d in Proceedings of the Workshop on Artificial Intelligence Safety 2021 (SafeAI 2021) Co-located with the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI 2021), Virtual, February 8, 2021, Ser. CEUR Workshop Proceedings, H. Espinoza, J. A. McDermid, X. Huang, M. Castillo-Effen, X. C. Chen, J. Hern\u00e1ndez-Orallo, S. \u00d3. 
h\u00c9igeartaigh, and R. Mallah, Eds., vol.\u00a02808. CEUR-WS.org, 2021. [Online]. Available: https:\/\/ceur-ws.org\/Vol-2808\/Paper_14.pdf.","key":"2025053021205068381_j_itit-2025-0007_ref_096"},{"doi-asserted-by":"crossref","unstructured":"A. Cropper, S. Dumancic, R. Evans, and S. H. Muggleton, \u201cInductive logic programming at 30,\u201d Mach. Learn., vol.\u00a0111, no.\u00a01, pp.\u00a0147\u2013172, 2022. [Online]. Available: https:\/\/doi.org\/10.1007\/s10994-021-06089-1.","key":"2025053021205068381_j_itit-2025-0007_ref_097","DOI":"10.1007\/s10994-021-06089-1"},{"doi-asserted-by":"crossref","unstructured":"A. Cropper and S. Dumancic, \u201cInductive logic programming at 30: a new introduction,\u201d J. Artif. Intell. Res., vol.\u00a074, pp.\u00a0765\u2013850, 2022. [Online]. Available: https:\/\/doi.org\/10.1613\/jair.1.13507.","key":"2025053021205068381_j_itit-2025-0007_ref_098","DOI":"10.1613\/jair.1.13507"},{"doi-asserted-by":"crossref","unstructured":"R. Morel and A. Cropper, \u201cLearning logic programs by explaining their failures,\u201d Mach. Learn., vol.\u00a0112, no.\u00a010, pp.\u00a03917\u20133943, 2023. [Online]. Available: https:\/\/doi.org\/10.1007\/s10994-023-06358-1.","key":"2025053021205068381_j_itit-2025-0007_ref_099","DOI":"10.1007\/s10994-023-06358-1"},{"doi-asserted-by":"crossref","unstructured":"J. Rabold, \u201cA neural-symbolic approach for explanation generation based on sub-concept detection: an application of metric learning for low-time-budget labeling,\u201d K\u00fcnstliche Intell, vol.\u00a036, no.\u00a03, pp.\u00a0225\u2013235, 2022. [Online]. Available: https:\/\/doi.org\/10.1007\/s13218-022-00771-9.","key":"2025053021205068381_j_itit-2025-0007_ref_100","DOI":"10.1007\/s13218-022-00771-9"},{"doi-asserted-by":"crossref","unstructured":"R. Manhaeve, S. Dumancic, A. Kimmig, T. Demeester, and L. D. Raedt, \u201cNeural probabilistic logic programming in deepproblog,\u201d Artif. Intell., vol.\u00a0298, p.\u00a0103504, 2021. 
[Online]. Available: https:\/\/doi.org\/10.1016\/j.artint.2021.103504.","key":"2025053021205068381_j_itit-2025-0007_ref_101","DOI":"10.1016\/j.artint.2021.103504"},{"doi-asserted-by":"crossref","unstructured":"J. Rabold, M. Siebers, and U. Schmid, \u201cExplaining black-box classifiers with ILP \u2013 empowering LIME with aleph to approximate non-linear decisions with relational rules,\u201d in Inductive Logic Programming \u2013 28th International Conference, ILP 2018, Ferrara, Italy, September 2-4, 2018, Proceedings, Ser. Lecture Notes in Computer Science, F. Riguzzi, E. Bellodi, and R. Zese, Eds., vol.\u00a011105. Springer, 2018, pp.\u00a0105\u2013117. [Online]. Available: https:\/\/doi.org\/10.1007\/978-3-319-99960-9_7.","key":"2025053021205068381_j_itit-2025-0007_ref_102","DOI":"10.1007\/978-3-319-99960-9_7"},{"doi-asserted-by":"crossref","unstructured":"A. N. Fadja, F. Riguzzi, and E. Lamma, \u201cLearning hierarchical probabilistic logic programs,\u201d Mach. Learn., vol.\u00a0110, no.\u00a07, pp.\u00a01637\u20131693, 2021. [Online]. Available: https:\/\/doi.org\/10.1007\/s10994-021-06016-4.","key":"2025053021205068381_j_itit-2025-0007_ref_103","DOI":"10.1007\/s10994-021-06016-4"},{"doi-asserted-by":"crossref","unstructured":"F. Riguzzi, E. Bellodi, and R. Zese, \u201cA history of probabilistic inductive logic programming,\u201d Front. Robot. AI, vol.\u00a01, p.\u00a06, 2014. [Online]. Available: https:\/\/doi.org\/10.3389\/frobt.2014.00006.","key":"2025053021205068381_j_itit-2025-0007_ref_104","DOI":"10.3389\/frobt.2014.00006"},{"unstructured":"A. Srinivasan, The Aleph Manual, 2007. [Online]. Available: https:\/\/www.cs.ox.ac.uk\/activities\/programinduction\/Aleph\/aleph.html.","key":"2025053021205068381_j_itit-2025-0007_ref_105"},{"unstructured":"Y. 
LeCun, et al.., \u201cHandwritten digit recognition with a back-propagation network,\u201d in Advances in Neural Information Processing Systems 2, [NIPS Conference, Denver, Colorado, USA, November 27-30, 1989], D. S. Touretzky, Ed., Morgan Kaufmann, 1989, pp.\u00a0396\u2013404. [Online]. Available: http:\/\/papers.nips.cc\/paper\/293-handwritten-digit-recognition-with-a-back-propagation-network.","key":"2025053021205068381_j_itit-2025-0007_ref_106"},{"doi-asserted-by":"crossref","unstructured":"R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, \u201cGrad-cam: visual explanations from deep networks via gradient-based localization,\u201d Int. J. Comput. Vis., vol.\u00a0128, no.\u00a02, pp.\u00a0336\u2013359, 2020. [Online]. Available: https:\/\/doi.org\/10.1007\/s11263-019-01228-7.","key":"2025053021205068381_j_itit-2025-0007_ref_107","DOI":"10.1007\/s11263-019-01228-7"},{"unstructured":"A. D. Santis, R. Campi, M. Bianchi, and M. Brambilla, \u201cVisual-tcav: concept-based attribution and saliency maps for post-hoc explainability in image classification,\u201d CoRR, vols. abs\/2411, no.\u00a005698, 2024. [Online]. Available: https:\/\/doi.org\/10.48550\/arXiv.2411.05698.","key":"2025053021205068381_j_itit-2025-0007_ref_108"},{"doi-asserted-by":"crossref","unstructured":"G. Montavon, A. Binder, S. Lapuschkin, W. Samek, and K. M\u00fcller, \u201cLayer-wise relevance propagation: an overview,\u201d in Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, Ser. Lecture Notes in Computer Science, vol.\u00a011700, Springer, 2019, pp.\u00a0193\u2013209. [Online].","key":"2025053021205068381_j_itit-2025-0007_ref_109","DOI":"10.1007\/978-3-030-28954-6_10"},{"unstructured":"R. Achtibat, et al.., \u201cFrom \u201cwhere\u201d to \u201cwhat\u201d: towards human-understandable explanations through concept relevance propagation,\u201d CoRR, vols. 
abs\/2206, p.\u00a003208, 2022.","key":"2025053021205068381_j_itit-2025-0007_ref_110"},{"doi-asserted-by":"crossref","unstructured":"R. Achtibat, M. Dreyer, I. Eisenbraun, S. Bosse, T. Wiegand, W. Samek, \u201cFrom attribution maps to human-understandable explanations through concept relevance propagation,\u201d Nat. Mach. Intell., vol.\u00a05, no.\u00a09, pp.\u00a01006\u20131019, 2023, https:\/\/doi.org\/10.1038\/s42256-023-00711-8.","key":"2025053021205068381_j_itit-2025-0007_ref_111","DOI":"10.1038\/s42256-023-00711-8"},{"unstructured":"L. Van der Maaten and G. Hinton, \u201cVisualizing data using t-sne,\u201d J. Mach. Learn. Res., vol.\u00a09, no.\u00a011, 2008.","key":"2025053021205068381_j_itit-2025-0007_ref_112"},{"doi-asserted-by":"crossref","unstructured":"S. Lapuschkin, S. W\u00e4ldchen, A. Binder, G. Montavon, W. Samek, and K.-R. M\u00fcller, \u201cUnmasking clever hans predictors and assessing what machines really learn,\u201d Nat. Commun., vol.\u00a010, no.\u00a01, p.\u00a01096, 2019. [Online]. Available: https:\/\/doi.org\/10.1038\/s41467-019-08987-4.","key":"2025053021205068381_j_itit-2025-0007_ref_113","DOI":"10.1038\/s41467-019-08987-4"},{"unstructured":"P. H. Winston, \u201cLearning structural descriptions from examples,\u201d in The Psychology of Computer Vision, McGraw-Hill, 1975, pp.\u00a0157\u2013210.","key":"2025053021205068381_j_itit-2025-0007_ref_114"},{"unstructured":"A. Beckwith, \u201cCs peirce and abduction inference,\u201d JCCC Honors J., vol.\u00a010, no.\u00a01, p.\u00a02, 2019.","key":"2025053021205068381_j_itit-2025-0007_ref_115"},{"unstructured":"I. Douven, \u201cAbduction (with supplement:\u2019peirce on abduction\u2019),\u201d in The Stanford Encyclopedia of Philosophy, Metaphysics Research Lab, Stanford University, 2011.","key":"2025053021205068381_j_itit-2025-0007_ref_116"},{"unstructured":"C. S. Peirce, \u201cPragmatism and abduction\u201d lecture,\u201d in The Collected Papers of Charles Sanders Peirce. 
Harvard University, vol.\u00a05, Cambridge, MA, Pragmatism and Pragmaticism, Harvard University Press, 1934, pp.\u00a0180\u2013212.","key":"2025053021205068381_j_itit-2025-0007_ref_117"},{"unstructured":"F. Ilievski, et al.., \u201cAligning generalisation between humans and machines,\u201d CoRR, vols. abs\/2411, no.\u00a015626, 2024. [Online]. Available: https:\/\/doi.org\/10.48550\/arXiv.2411.15626.","key":"2025053021205068381_j_itit-2025-0007_ref_118"},{"doi-asserted-by":"crossref","unstructured":"F. Heintz, M. Milano, and B. O\u2019Sullivan, Eds.in Trustworthy AI \u2013 Integrating Learning, Optimization and Reasoning \u2013 First International Workshop, TAILOR 2020, Virtual Event, September 4-5, 2020, Revised Selected Papers, ser. Lecture Notes in Computer Science, vol.\u00a012641, Springer, 2021. [Online]. Available.","key":"2025053021205068381_j_itit-2025-0007_ref_119","DOI":"10.1007\/978-3-030-73959-1"},{"unstructured":"A. Srinivasan, L. Vig, and M. Bain, \u201cLogical explanations for deep relational machines using relevance information,\u201d J. Mach. Learn. Res., vol.\u00a020, pp.\u00a0130:1\u2013130:47, 2019. [Online]. Available: https:\/\/jmlr.org\/papers\/v20\/18-517.html.","key":"2025053021205068381_j_itit-2025-0007_ref_120"},{"doi-asserted-by":"crossref","unstructured":"B. Finzel, \u201cCurrent methods in explainable artificial intelligence and future prospects for integrative physiology,\u201d Pfl\u00fcgers Archiv. \u2013 Europ. J. Physiol., vol.\u00a0477, pp.\u00a0513\u2013529, 2025. [Online]. Available: https:\/\/doi.org\/10.1007\/s00424-025-03067-7.","key":"2025053021205068381_j_itit-2025-0007_ref_121","DOI":"10.1007\/s00424-025-03067-7"},{"doi-asserted-by":"crossref","unstructured":"T. Dash, S. Chitlangia, A. Ahuja, and A. Srinivasan, \u201cA review of some techniques for inclusion of domain-knowledge into deep neural networks,\u201d Sci. Rep., vol.\u00a012, no.\u00a01, p.\u00a01040, 2022. [Online]. 
Available: https:\/\/doi.org\/10.1038\/s41598-021-04590-0.","key":"2025053021205068381_j_itit-2025-0007_ref_122","DOI":"10.1038\/s41598-021-04590-0"},{"doi-asserted-by":"crossref","unstructured":"S. Vadera and S. Ameen, \u201cMethods for pruning deep neural networks,\u201d IEEE Access, vol.\u00a010, pp.\u00a063 280\u201363 300, 2022. [Online]. Available: https:\/\/doi.org\/10.1109\/ACCESS.2022.3182659.","key":"2025053021205068381_j_itit-2025-0007_ref_123","DOI":"10.1109\/ACCESS.2022.3182659"},{"doi-asserted-by":"crossref","unstructured":"Q. Guo, X. Wu, J. Kittler, and Z. Feng, \u201cSelf-grouping convolutional neural networks,\u201d Neural Netw., vol.\u00a0132, pp.\u00a0491\u2013505, 2020. [Online]. Available: https:\/\/doi.org\/10.1016\/j.neunet.2020.09.015.","key":"2025053021205068381_j_itit-2025-0007_ref_124","DOI":"10.1016\/j.neunet.2020.09.015"},{"doi-asserted-by":"crossref","unstructured":"W. C. Zimmerli, \u201cAnalog oder digital? philosophieren nach dem ende der philosophie,\u201d in Was Ist Digitalit\u00e4t? Philosophische Und P\u00e4dagogische Perspektiven, U. Hauck-Thum, and J. Noller, Eds., Springer Berlin Heidelberg, 2021, pp.\u00a09\u201333.","key":"2025053021205068381_j_itit-2025-0007_ref_125","DOI":"10.1007\/978-3-662-62989-5_2"},{"doi-asserted-by":"crossref","unstructured":"E. Margolis and S. Laurence, \u201cThe ontology of concepts\u2013abstract objects or mental representations?\u201d No\u00fbs, vol.\u00a041, no.\u00a04, pp.\u00a0561\u2013593, 2007. [Online]. Available: https:\/\/onlinelibrary.wiley.com\/doi\/abs\/10.1111\/j.1468-0068.2007.00663.x.","key":"2025053021205068381_j_itit-2025-0007_ref_126","DOI":"10.1111\/j.1468-0068.2007.00663.x"},{"unstructured":"R. Liu, et al.., \u201cAn intriguing failing of convolutional neural networks and the coordconv solution,\u201d in Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, S. Bengio, H. M. 
Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Eds., Montr\u00e9al, Canada, 2018, pp.\u00a09628\u20139639. [Online]. Available: https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/60106888f8977b71e1f15db7bc9a88d1-Abstract.html.","key":"2025053021205068381_j_itit-2025-0007_ref_127"},{"unstructured":"J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. A. Riedmiller, \u201cStriving for simplicity: the all convolutional net,\u201d in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Workshop Track Proceedings, Y. Bengio, and Y. LeCun, Eds., 2015. [Online]. Available: http:\/\/arxiv.org\/abs\/1412.6806.","key":"2025053021205068381_j_itit-2025-0007_ref_128"},{"unstructured":"D. L. Silver and T. M. Mitchell, \u201cThe roles of symbols in neural-based AI: they are not what you think!\u201d in Proceedings of the 17th International Workshop on Neural-Symbolic Learning and Reasoning, La Certosa di Pontignano, Siena, Italy, July 3-5, 2023, Ser. CEUR Workshop Proceedings, A. S. d\u2019Avila Garcez, T. R. Besold, M. Gori, and E. Jim\u00e9nez-Ruiz, Eds., vol.\u00a03432. CEUR-WS.org, 2023, pp.\u00a0420\u2013421. [Online]. Available: https:\/\/ceur-ws.org\/Vol-3432\/paper40.pdf.","key":"2025053021205068381_j_itit-2025-0007_ref_129"},{"doi-asserted-by":"crossref","unstructured":"A. Leventi-Peetz and K. Weber, \u201cRashomon effect and consistency in explainable artificial intelligence (XAI),\u201d in Proceedings of the Future Technologies Conference, FTC 2022, Virtual Event, 20-21 October 2022, Volume 1, ser. Lecture Notes in Networks and Systems, K. Arai, Ed., vol. 559. Springer, 2022, pp.\u00a0796\u2013808. [Online]. 
Available.","key":"2025053021205068381_j_itit-2025-0007_ref_130","DOI":"10.1007\/978-3-031-18461-1_52"}],"container-title":["it - Information Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.degruyterbrill.com\/document\/doi\/10.1515\/itit-2025-0007\/xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.degruyterbrill.com\/document\/doi\/10.1515\/itit-2025-0007\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,5,30]],"date-time":"2025-05-30T21:21:22Z","timestamp":1748640082000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.degruyterbrill.com\/document\/doi\/10.1515\/itit-2025-0007\/html"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,2,1]]},"references-count":130,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,4,24]]},"published-print":{"date-parts":[[2025,2,25]]}},"alternative-id":["10.1515\/itit-2025-0007"],"URL":"https:\/\/doi.org\/10.1515\/itit-2025-0007","relation":{},"ISSN":["1611-2776","2196-7032"],"issn-type":[{"type":"print","value":"1611-2776"},{"type":"electronic","value":"2196-7032"}],"subject":[],"published":{"date-parts":[[2025,2,1]]}}}