{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,4]],"date-time":"2026-05-04T05:53:48Z","timestamp":1777874028667,"version":"3.51.4"},"reference-count":108,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2024,11,7]],"date-time":"2024-11-07T00:00:00Z","timestamp":1730937600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Austrian Science Fund","award":["P-32554"],"award-info":[{"award-number":["P-32554"]}]},{"name":"Juan de la Cierva Incorporaci\u00f3n","award":["IJC2019-039152-I"],"award-info":[{"award-number":["IJC2019-039152-I"]}]},{"name":"\u201cESF Investing in your future\u201d, a MSCA Postdoctoral Fellowship","award":["101059332"],"award-info":[{"award-number":["101059332"]}]},{"name":"Google Research Scholar Program, and a 2022 Leonardo Grant for Researchers and Cultural Creators from BBVA Foundation"},{"name":"European Union\u2019s Horizon 2020 research and innovation programme","award":["765955"],"award-info":[{"award-number":["765955"]}]},{"name":"European Union\u2019s Horizon 2020 research and innovation programme","award":["826078"],"award-info":[{"award-number":["826078"]}]},{"name":"PNRR project INEST - Interconnected North-East Innovation Ecosystem","award":["ECS00000043"],"award-info":[{"award-number":["ECS00000043"]}]},{"name":"PNRR project FAIR - Future AI Research","award":["PE00000013"],"award-info":[{"award-number":["PE00000013"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Comput. Surv."],"published-print":{"date-parts":[[2025,2,28]]},"abstract":"<jats:p>The past years have been characterized by an upsurge in opaque automatic decision support systems, such as Deep Neural Networks (DNNs). 
Although DNNs have great generalization and prediction abilities, it is difficult to obtain detailed explanations for their behavior. As opaque Machine Learning models are increasingly being employed to make important predictions in critical domains, there is a danger of creating and using decisions that are not justifiable or legitimate. Therefore, there is a general agreement on the importance of endowing DNNs with explainability. EXplainable Artificial Intelligence (XAI) techniques can serve to verify and certify model outputs and enhance them with desirable notions such as trustworthiness, accountability, transparency, and fairness. This guide is intended to be the go-to handbook for anyone with a computer science background aiming to obtain an intuitive insight from Machine Learning models accompanied by explanations out-of-the-box. The article aims to rectify the lack of a practical XAI guide by applying XAI techniques to particular day-to-day models, datasets, and use-cases. In each chapter, the reader will find a description of the proposed method as well as one or several examples of use with Python notebooks. These can be easily modified to be applied to specific applications. 
We also explain what the prerequisites are for using each technique, what the user will learn about them, and which tasks they are aimed at.<\/jats:p>","DOI":"10.1145\/3670685","type":"journal-article","created":{"date-parts":[[2024,6,12]],"date-time":"2024-06-12T11:11:49Z","timestamp":1718190709000},"page":"1-44","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":78,"title":["A Practical Tutorial on Explainable AI Techniques"],"prefix":"10.1145","volume":"57","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8232-8728","authenticated-orcid":false,"given":"Adrien","family":"Bennetot","sequence":"first","affiliation":[{"name":"Sorbonne Universit\u00e9, Paris, France"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0701-5729","authenticated-orcid":false,"given":"Ivan","family":"Donadello","sequence":"additional","affiliation":[{"name":"Free University of Bozen-Bolzano, Bolzano, Italy"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7296-1252","authenticated-orcid":false,"given":"Ayoub","family":"El Qadi El Haouari","sequence":"additional","affiliation":[{"name":"Sorbonne Universite, Paris, France and Tinubu Square, Paris, France"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0380-6571","authenticated-orcid":false,"given":"Mauro","family":"Dragoni","sequence":"additional","affiliation":[{"name":"Fondazione Bruno Kessler, Trento, Italy"}]},{"ORCID":"https:\/\/orcid.org\/0009-0005-6388-5672","authenticated-orcid":false,"given":"Thomas","family":"Frossard","sequence":"additional","affiliation":[{"name":"Tinubu Square, Paris, France"}]},{"ORCID":"https:\/\/orcid.org\/0009-0002-6747-1862","authenticated-orcid":false,"given":"Benedikt","family":"Wagner","sequence":"additional","affiliation":[{"name":"City University of London, London, United Kingdom of Great Britain and Northern 
Ireland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1085-8428","authenticated-orcid":false,"given":"Anna","family":"Sarranti","sequence":"additional","affiliation":[{"name":"University of Natural Resources and Life Sciences Vienna, Wien, Austria"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6826-370X","authenticated-orcid":false,"given":"Silvia","family":"Tulli","sequence":"additional","affiliation":[{"name":"Sorbonne Universite, Paris, France"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6241-0126","authenticated-orcid":false,"given":"Maria","family":"Trocan","sequence":"additional","affiliation":[{"name":"Institut Sup\u00e9rieur d'\u00c9lectronique de Paris (ISEP), Paris, France"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7822-0634","authenticated-orcid":false,"given":"Raja","family":"Chatila","sequence":"additional","affiliation":[{"name":"Sorbonne Universite, Paris, France"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6786-5194","authenticated-orcid":false,"given":"Andreas","family":"Holzinger","sequence":"additional","affiliation":[{"name":"University of Natural Resources and Life Sciences Vienna, Wien, Austria and Institute for Medical Informatics, Medical University Graz, Graz, Austria"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7375-9518","authenticated-orcid":false,"given":"Artur","family":"Davila Garcez","sequence":"additional","affiliation":[{"name":"City University, London, United Kingdom of Great Britain and Northern Ireland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3362-9326","authenticated-orcid":false,"given":"Natalia","family":"D\u00edaz-Rodr\u00edguez","sequence":"additional","affiliation":[{"name":"University of Granada, Granada, Spain"}]}],"member":"320","published-online":{"date-parts":[[2024,11,7]]},"reference":[{"key":"e_1_3_3_2_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.dajour.2023.100230"},{"key":"e_1_3_3_3_2","volume-title":"Proceedings of the 4th International Conference on Pattern Recognition Applications and 
Methods","author":"Abu-Aisheh Zeina","year":"2015","unstructured":"Zeina Abu-Aisheh, Romain Raveaux, Jean-Yves Ramel, and Patrick Martineau. 2015. An exact graph edit distance algorithm for solving pattern recognition problems. In Proceedings of the 4th International Conference on Pattern Recognition Applications and Methods."},{"key":"e_1_3_3_4_2","first-page":"9505","volume-title":"Proceedings of the Conference on Advances in Neural Information Processing Systems","author":"Adebayo Julius","year":"2018","unstructured":"Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. 2018. Sanity checks for saliency maps. In Proceedings of the Conference on Advances in Neural Information Processing Systems. 9505\u20139515."},{"issue":"978","key":"e_1_3_3_5_2","first-page":"3","article-title":"Neural networks and deep learning","volume":"10","author":"Aggarwal Charu C.","year":"2018","unstructured":"Charu C. Aggarwal. 2018. Neural networks and deep learning. Springer 10, 978 (2018), 3.","journal-title":"Springer"},{"key":"e_1_3_3_6_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2023.101805"},{"key":"e_1_3_3_7_2","article-title":"Explainable artificial intelligence: An analytical review","volume":"11","author":"Angelov Plamen P.","year":"2021","unstructured":"Plamen P. Angelov, Eduardo Almeida Soares, Richard Jiang, Nicholas I. Arnold, and Peter M. Atkinson. 2021. Explainable artificial intelligence: An analytical review. Wiley Interdiscip. Rev.: Data Min. Knowl. Discov. 11 (2021). Retrieved from https:\/\/api.semanticscholar.org\/CorpusID:236501382","journal-title":"Wiley Interdiscip. Rev.: Data Min. Knowl. Discov."},{"key":"e_1_3_3_8_2","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/W17-5221","article-title":"Explaining recurrent neural network predictions in sentiment analysis","author":"Arras Leila","year":"2017","unstructured":"Leila Arras, Gr\u00e9goire Montavon, Klaus-Robert M\u00fcller, and Wojciech Samek. 2017. 
Explaining recurrent neural network predictions in sentiment analysis. In EMNLP\u201917 Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA\u201917).","journal-title":"EMNLP\u201917 Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA\u201917)"},{"key":"e_1_3_3_9_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2019.12.012"},{"key":"e_1_3_3_10_2","article-title":"Logic tensor networks","author":"Badreddine Samy","year":"2020","unstructured":"Samy Badreddine, Artur d\u2019Avila Garcez, Luciano Serafini, and Michael Spranger. 2020. Logic tensor networks. arXiv preprint arXiv:2012.13635 (2020).","journal-title":"arXiv preprint arXiv:2012.13635"},{"key":"e_1_3_3_11_2","doi-asserted-by":"publisher","DOI":"10.5555\/1756006.1859912"},{"key":"e_1_3_3_12_2","doi-asserted-by":"publisher","unstructured":"Jacqueline Michelle Metsch Anna Saranti Alessa Angerschmid Bastian Pfeifer Vanessa Klemt Andreas Holzinger and Anne-Christin Hauschild. 2024. CLARUS: An interactive explainable AI platform for manual counterfactuals in graph neural networks. Journal of Biomedical Informatics 150 (2024) 104600. 10.1016\/j.jbi.2024.104600","DOI":"10.1016\/j.jbi.2024.104600"},{"key":"e_1_3_3_13_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2022.109947"},{"issue":"1","key":"e_1_3_3_14_2","first-page":"D267\u2013D270","article-title":"The unified medical language system (UMLS): Integrating biomedical terminology","volume":"32","author":"Bodenreider Olivier","year":"2004","unstructured":"Olivier Bodenreider. 2004. The unified medical language system (UMLS): Integrating biomedical terminology. Nucleic Acids Res. 
32, suppl_1 (2004), D267\u2013D270.","journal-title":"Nucleic Acids Res."},{"key":"e_1_3_3_15_2","doi-asserted-by":"publisher","DOI":"10.1017\/CBO9780511804441"},{"key":"e_1_3_3_16_2","doi-asserted-by":"publisher","DOI":"10.1023\/A:1010933404324"},{"key":"e_1_3_3_17_2","doi-asserted-by":"publisher","DOI":"10.3389\/frai.2020.00026"},{"key":"e_1_3_3_18_2","volume-title":"Proceedings of the International Conference on Machine Learning (ICML\u201921)","author":"Camburu Oana-Maria","year":"2021","unstructured":"Oana-Maria Camburu and Z. Akata. 2021. Natural-XAI: Explainable AI with natural language explanations. In Proceedings of the International Conference on Machine Learning (ICML\u201921)."},{"key":"e_1_3_3_19_2","doi-asserted-by":"publisher","DOI":"10.3390\/electronics8080832"},{"key":"e_1_3_3_20_2","doi-asserted-by":"publisher","DOI":"10.1613\/jair.953"},{"key":"e_1_3_3_21_2","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939785"},{"key":"e_1_3_3_22_2","doi-asserted-by":"publisher","DOI":"10.3390\/electronics12163407"},{"issue":"4","key":"e_1_3_3_23_2","doi-asserted-by":"crossref","first-page":"1135","DOI":"10.1007\/s13347-021-00474-3","article-title":"Companies committed to responsible AI: From principles towards implementation and regulation?","volume":"34","author":"Laat Paul B de","year":"2021","unstructured":"Paul B de Laat. 2021. Companies committed to responsible AI: From principles towards implementation and regulation? Philos. Technol. 34, 4 (2021), 1135\u20131193.","journal-title":"Philos. Technol."},{"key":"e_1_3_3_24_2","doi-asserted-by":"crossref","first-page":"119898","DOI":"10.1016\/j.ins.2023.119898","article-title":"On generating trustworthy counterfactual explanations","volume":"655","author":"Ser Javier Del","year":"2024","unstructured":"Javier Del Ser, Alejandro Barredo-Arrieta, Natalia D\u00edaz-Rodr\u00edguez, Francisco Herrera, Anna Saranti, and Andreas Holzinger. 2024. On generating trustworthy counterfactual explanations. Inf. Sci. 
655 (2024), 119898.","journal-title":"Inf. Sci."},{"key":"e_1_3_3_25_2","volume-title":"Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT\u201919)","author":"Devlin J.","year":"2019","unstructured":"J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT\u201919)."},{"key":"e_1_3_3_26_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ipm.2023.103276"},{"key":"e_1_3_3_27_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2023.101896"},{"key":"e_1_3_3_28_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2021.09.022"},{"key":"e_1_3_3_29_2","first-page":"317","volume-title":"Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization","author":"D\u00edaz-Rodr\u00edguez Natalia","year":"2020","unstructured":"Natalia D\u00edaz-Rodr\u00edguez and Galena Pisoni. 2020. Accessible cultural heritage through explainable artificial intelligence. In Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization. 317\u2013324."},{"key":"e_1_3_3_30_2","series-title":"CEUR Workshop Proceedings","first-page":"46","volume-title":"PROFILES\/SEMEX@ISWC","volume":"2465","author":"Donadello Ivan","year":"2019","unstructured":"Ivan Donadello, Mauro Dragoni, and Claudio Eccher. 2019. Persuasive explanation of reasoning inferences on dietary data. In PROFILES\/SEMEX@ISWC(CEUR Workshop Proceedings, Vol. 2465). CEUR-WS.org, 46\u201361."},{"key":"e_1_3_3_31_2","series-title":"CEUR Workshop Proceedings","first-page":"1","volume-title":"CEx@AI*IA","volume":"2071","author":"Doran Derek","year":"2017","unstructured":"Derek Doran, Sarah Schulz, and Tarek R. Besold. 2017. 
What does explainable AI really mean? A new conceptualization of perspectives. In CEx@AI*IA(CEUR Workshop Proceedings, Vol. 2071). CEUR-WS.org, 1\u20138."},{"key":"e_1_3_3_32_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.is.2022.102162"},{"key":"e_1_3_3_33_2","doi-asserted-by":"publisher","DOI":"10.1145\/2090236.2090255"},{"key":"e_1_3_3_34_2","doi-asserted-by":"crossref","first-page":"769","DOI":"10.1007\/978-3-031-41456-5_58","volume-title":"Proceedings of the International Conference on Computational Collective Intelligence","author":"El-Qadi Ayoub","year":"2023","unstructured":"Ayoub El-Qadi, Maria Trocan, Patricia Conde-Cespedes, Thomas Frossard, and Natalia D\u00edaz-Rodr\u00edguez. 2023. Credit risk scoring using a data fusion approach. In Proceedings of the International Conference on Computational Collective Intelligence. Springer, 769\u2013781."},{"key":"e_1_3_3_35_2","first-page":"226","volume-title":"ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD\u201996)","author":"Ester Martin","year":"1996","unstructured":"Martin Ester, Hans-Peter Kriegel, J\u00f6rg Sander, and Xiaowei Xu. 1996. A density-based algorithm for discovering clusters in large spatial databases with noise. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD\u201996). 226\u2013231."},{"issue":"2","key":"e_1_3_3_36_2","doi-asserted-by":"crossref","first-page":"29","DOI":"10.1109\/MC.2022.3212091","article-title":"Trustworthy artificial intelligence requirements in the autonomous driving domain","volume":"56","author":"Fernandez-Llorca David","year":"2023","unstructured":"David Fernandez-Llorca and Emilia G\u00f3mez. 2023. Trustworthy artificial intelligence requirements in the autonomous driving domain. 
Computer 56, 2 (2023), 29\u201339.","journal-title":"Computer"},{"key":"e_1_3_3_37_2","doi-asserted-by":"publisher","DOI":"10.1145\/3204949.3208125"},{"key":"e_1_3_3_38_2","doi-asserted-by":"publisher","DOI":"10.1056\/NEJMp1607591"},{"key":"e_1_3_3_39_2","doi-asserted-by":"publisher","DOI":"10.1080\/08839510601117169"},{"key":"e_1_3_3_40_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_3_41_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2021.10.007"},{"key":"e_1_3_3_42_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2021.01.008"},{"key":"e_1_3_3_43_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-40837-3_4"},{"key":"e_1_3_3_44_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2021.106916"},{"key":"e_1_3_3_45_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ins.2022.10.010"},{"key":"e_1_3_3_46_2","article-title":"GCI: A (G) raph (C) oncept (I) nterpretation Framework","author":"Kazhdan Dmitry","year":"2023","unstructured":"Dmitry Kazhdan, Botty Dimanov, Lucie Charlotte Magister, Pietro Barbiero, Mateja Jamnik, and Pietro Lio. 2023. GCI: A (G) raph (C) oncept (I) nterpretation Framework. arXiv preprint arXiv:2302.04899 (2023).","journal-title":"arXiv preprint arXiv:2302.04899"},{"key":"e_1_3_3_47_2","article-title":"Semi-supervised classification with graph convolutional networks","author":"Kipf Thomas N.","year":"2016","unstructured":"Thomas N. Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016).","journal-title":"arXiv preprint arXiv:1609.02907"},{"key":"e_1_3_3_48_2","volume-title":"Proceedings of the Conference on Neural Information Processing Systems (NIPS\u201915)","author":"Kiros Ryan","year":"2015","unstructured":"Ryan Kiros, Yukun Zhu, R. Salakhutdinov, R. Zemel, R. Urtasun, A. Torralba, and S. Fidler. 2015. Skip-thought vectors. 
In Proceedings of the Conference on Neural Information Processing Systems (NIPS\u201915)."},{"key":"e_1_3_3_49_2","article-title":"Captum: A unified and generic model interpretability library for PyTorch","volume":"2009","author":"Kokhlikyan Narine","year":"2020","unstructured":"Narine Kokhlikyan, Vivek Miglani, M. Mart\u00edn, E. Wang, B. Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, and Orion Reblitz-Richardson. 2020. Captum: A unified and generic model interpretability library for PyTorch. ArXiv abs\/2009.07896 (2020).","journal-title":"ArXiv"},{"key":"e_1_3_3_50_2","article-title":"The disagreement problem in explainable machine learning: A practitioner\u2019s perspective","author":"Krishna Satyapriya","year":"2022","unstructured":"Satyapriya Krishna, Tessa Han, Alex Gu, Javin Pombra, Shahin Jabbari, Steven Wu, and Himabindu Lakkaraju. 2022. The disagreement problem in explainable machine learning: A practitioner\u2019s perspective. arXiv preprint arXiv:2202.01602 (2022).","journal-title":"arXiv preprint arXiv:2202.01602"},{"key":"e_1_3_3_51_2","first-page":"1097","article-title":"ImageNet classification with deep convolutional neural networks","volume":"25","author":"Krizhevsky Alex","year":"2012","unstructured":"Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 25 (2012), 1097\u20131105.","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"e_1_3_3_52_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41467-019-08987-4"},{"key":"e_1_3_3_53_2","unstructured":"Jeff Larson Surya Mattu Lauren Kirchner and Julia Angwin. 2016. Data and analysis for \u201cHow we analyzed the COMPAS recidivism algorithm.\u201d Retrieved from https:\/\/github.com\/propublica\/compas-analysis"},{"key":"e_1_3_3_54_2","unstructured":"Scott M. Lundberg Gabriel G. Erion and Su-In Lee. 2019. 
Consistent Individualized Feature Attribution for Tree Ensembles. arxiv:1802.03888 [cs.LG]"},{"key":"e_1_3_3_55_2","first-page":"4765","volume-title":"Proceedings of the Conference on Advances in Neural Information Processing Systems","author":"Lundberg Scott M.","year":"2017","unstructured":"Scott M. Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Proceedings of the Conference on Advances in Neural Information Processing Systems. 4765\u20134774."},{"key":"e_1_3_3_56_2","article-title":"Physically-consistent generative adversarial networks for coastal flood visualization","author":"L\u00fctjens Bj\u00f6rn","year":"2021","unstructured":"Bj\u00f6rn L\u00fctjens, Brandon Leshchinskiy, Christian Requena-Mesa, Farrukh Chishtie, Natalia D\u00edaz-Rodr\u00edguez, Oc\u00e9ane Boulais, Aruna Sankaranarayanan, Aaron Pi\u00f1a, Yarin Gal, Chedy Ra\u00efssi, Alexander Lavin, and Dava Newman. 2021. Physically-consistent generative adversarial networks for coastal flood visualization. arXiv preprint arXiv:2104.04785 (2021).","journal-title":"arXiv preprint arXiv:2104.04785"},{"key":"e_1_3_3_57_2","volume-title":"Information Theory, Inference and Learning Algorithms","author":"MacKay David J. C.","year":"2003","unstructured":"David J. C. MacKay. 2003. Information Theory, Inference and Learning Algorithms. Cambridge University Press."},{"key":"e_1_3_3_58_2","article-title":"GCExplainer: Human-in-the-loop concept-based explanations for graph neural networks","author":"Magister Lucie Charlotte","year":"2021","unstructured":"Lucie Charlotte Magister, Dmitry Kazhdan, Vikash Singh, and Pietro Li\u00f2. 2021. GCExplainer: Human-in-the-loop concept-based explanations for graph neural networks. 
arXiv preprint arXiv:2107.11889 (2021).","journal-title":"arXiv preprint arXiv:2107.11889"},{"key":"e_1_3_3_59_2","article-title":"UMAP: Uniform manifold approximation and projection for dimension reduction","author":"McInnes Leland","year":"2018","unstructured":"Leland McInnes, John Healy, and James Melville. 2018. UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018).","journal-title":"arXiv preprint arXiv:1802.03426"},{"key":"e_1_3_3_60_2","doi-asserted-by":"publisher","unstructured":"Carlo Metta Andrea Beretta Riccardo Guidotti Yuan Yin Patrick Gallinari Salvatore Rinzivillo and Fosca Giannotti. 2021. Explainable deep image classifiers for skin lesion diagnosis. DOI:10.48550\/ARXIV.2111.11863","DOI":"10.48550\/ARXIV.2111.11863"},{"key":"e_1_3_3_61_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10462-021-10088-y"},{"key":"e_1_3_3_62_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2016.11.008"},{"key":"e_1_3_3_63_2","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372850"},{"key":"e_1_3_3_64_2","article-title":"COVID-Twitter-BERT: A natural language processing model to analyse COVID-19 content on Twitter","author":"M\u00fcller Martin","year":"2020","unstructured":"Martin M\u00fcller, Marcel Salath\u00e9, and Per E. Kummervold. 2020. COVID-Twitter-BERT: A natural language processing model to analyse COVID-19 content on Twitter. arXiv preprint arXiv:2005.07503 (2020).","journal-title":"arXiv preprint arXiv:2005.07503"},{"key":"e_1_3_3_65_2","doi-asserted-by":"crossref","first-page":"104","DOI":"10.1016\/j.jbi.2015.03.005","article-title":"Tailored motivational message generation: A model and practical framework for real-time physical activity coaching","volume":"55","author":"Akker H. op den","year":"2015","unstructured":"H. op den Akker, M. Cabrita, R. op den Akker, V. M. Jones, and H. J. Hermens. 2015. 
Tailored motivational message generation: A model and practical framework for real-time physical activity coaching. J. Biomed. Inform. 55 (2015), 104\u2013115.","journal-title":"J. Biomed. Inform."},{"key":"e_1_3_3_66_2","doi-asserted-by":"publisher","unstructured":"Urja Pawar Donna O\u2019Shea Susan Rea and Ruairi O\u2019Reilly. 2020. Explainable AI in healthcare. DOI:10.1109\/CyberSA49311.2020.9139655","DOI":"10.1109\/CyberSA49311.2020.9139655"},{"key":"e_1_3_3_67_2","doi-asserted-by":"publisher","DOI":"10.1080\/14786440109462720"},{"key":"e_1_3_3_68_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N18-1202"},{"key":"e_1_3_3_69_2","article-title":"Transformers interpret version 0.5.2","author":"Pierse Charles","year":"2021","unstructured":"Charles Pierse. 2021. Transformers interpret version 0.5.2. Github Repository (2021). Retrieved from https:\/\/github.com\/cdpierse\/transformers-interpret","journal-title":"Github Repository"},{"issue":"3","key":"e_1_3_3_70_2","doi-asserted-by":"crossref","first-page":"103273","DOI":"10.1016\/j.ipm.2023.103273","article-title":"Responsible and human centric AI-based insurance advisors","volume":"60","author":"Pisoni Galena","year":"2023","unstructured":"Galena Pisoni and Natalia D\u00edaz-Rodr\u00edguez. 2023. Responsible and human centric AI-based insurance advisors. Inf. Process. Manag. 60, 3 (2023), 103273.","journal-title":"Inf. Process. Manag."},{"key":"e_1_3_3_71_2","doi-asserted-by":"publisher","DOI":"10.3390\/app11020870"},{"key":"e_1_3_3_72_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.01103"},{"key":"e_1_3_3_73_2","unstructured":"Alun Preece Dan Harborne Dave Braines Richard Tomsett and Supriyo Chakraborty. 2018. Stakeholders in Explainable AI. arXiv:1810.00184"},{"key":"e_1_3_3_74_2","doi-asserted-by":"crossref","unstructured":"Ayoub El Qadi Maria Trocan Natalia Diaz-Rodriguez and Thomas Frossard. 2023. Feature contribution alignment with expert knowledge for artificial intelligence credit scoring. 
Signal Image and Video Processing 17 2 (2023) 427\u2013434.","DOI":"10.1007\/s11760-022-02239-7"},{"key":"e_1_3_3_75_2","unstructured":"Alec Radford Jeff Wu R. Child David Luan Dario Amodei and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog 8 (2019) 9."},{"key":"e_1_3_3_76_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cogsys.2024.101243"},{"key":"e_1_3_3_77_2","doi-asserted-by":"publisher","DOI":"10.1613\/jair.1.15348"},{"key":"e_1_3_3_78_2","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939778"},{"key":"e_1_3_3_79_2","doi-asserted-by":"publisher","DOI":"10.1016\/0377-0427(87)90125-7"},{"key":"e_1_3_3_80_2","article-title":"Interpreting the predictions of complex ML models by layer-wise relevance propagation","author":"Samek Wojciech","year":"2016","unstructured":"Wojciech Samek, Gr\u00e9goire Montavon, Alexander Binder, Sebastian Lapuschkin, and Klaus-Robert M\u00fcller. 2016. Interpreting the predictions of complex ML models by layer-wise relevance propagation. arXiv preprint arXiv:1611.08191 (2016).","journal-title":"arXiv preprint arXiv:1611.08191"},{"key":"e_1_3_3_81_2","doi-asserted-by":"publisher","DOI":"10.3390\/make4040047"},{"key":"e_1_3_3_82_2","doi-asserted-by":"crossref","first-page":"499","DOI":"10.1007\/978-3-030-57321-8_28","volume-title":"Proceedings of the International Cross-domain Conference for Machine Learning and Knowledge Extraction","author":"Saranti Anna","year":"2020","unstructured":"Anna Saranti, Behnam Taraghi, Martin Ebner, and Andreas Holzinger. 2020. Property-based testing for parameter learning of probabilistic graphical models. In Proceedings of the International Cross-domain Conference for Machine Learning and Knowledge Extraction. Springer, 499\u2013515."},{"key":"e_1_3_3_83_2","article-title":"Higher-order explanations of graph neural networks via relevant walks","author":"Schnake T.","year":"2020","unstructured":"T. Schnake, O. Eberle, J. Lederer, S. Nakajima, K. T. 
Sch\u00fctt, K. R. M\u00fcller, and G. Montavon. 2020. Higher-order explanations of graph neural networks via relevant walks. arXiv: 2006.03589 (2020).","journal-title":"arXiv: 2006.03589"},{"key":"e_1_3_3_84_2","first-page":"1","article-title":"A comprehensive taxonomy for explainable artificial intelligence: A systematic survey of surveys on methods and concepts","author":"Schwalbe Gesina","year":"2023","unstructured":"Gesina Schwalbe and Bettina Finzel. 2023. A comprehensive taxonomy for explainable artificial intelligence: A systematic survey of surveys on methods and concepts. Data Min. Knowl. Discov. (2023), 1\u201359.","journal-title":"Data Min. Knowl. Discov."},{"key":"e_1_3_3_85_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.74"},{"key":"e_1_3_3_86_2","doi-asserted-by":"publisher","unstructured":"Ramprasaath R. Selvaraju Abhishek Das Ramakrishna Vedantam Michael Cogswell Devi Parikh and Dhruv Batra. 2020. Grad-CAM: Visual explanations from deep networks via gradient-based localization. Int J Comput Vis 128 (2020) 336\u2013359. 10.1007\/s11263-019-01228-7","DOI":"10.1007\/s11263-019-01228-7"},{"key":"e_1_3_3_87_2","first-page":"370","volume-title":"Neuro-Symbolic Artificial Intelligence: The State of the Art","author":"Serafini Luciano","year":"2021","unstructured":"Luciano Serafini, Artur d\u2019Avila Garcez, Samy Badreddine, Ivan Donadello, Michael Spranger, and Federico Bianchi. 2021. Logic tensor networks: Theory and applications. In Neuro-Symbolic Artificial Intelligence: The State of the Art. IOS Press, 370\u2013394."},{"key":"e_1_3_3_88_2","article-title":"Logic tensor networks: Deep learning and logical reasoning from data and knowledge","author":"Serafini Luciano","year":"2016","unstructured":"Luciano Serafini and Artur d\u2019Avila Garcez. 2016. Logic tensor networks: Deep learning and logical reasoning from data and knowledge. 
arXiv preprint arXiv:1606.04422 (2016).","journal-title":"arXiv preprint arXiv:1606.04422"},{"key":"e_1_3_3_89_2","doi-asserted-by":"publisher","DOI":"10.1515\/9781400881970-018"},{"issue":"1","key":"e_1_3_3_90_2","article-title":"Conceptualising fairness: Three pillars for medical algorithms and health equity","volume":"29","author":"Sikstrom Laura","year":"2022","unstructured":"Laura Sikstrom, Marta M. Maslej, Katrina Hui, Zoe Findlay, Daniel Z. Buchman, and Sean L. Hill. 2022. Conceptualising fairness: Three pillars for medical algorithms and health equity. BMJ Health Care Inform. 29, 1 (2022).","journal-title":"BMJ Health Care Inform."},{"key":"e_1_3_3_91_2","doi-asserted-by":"publisher","DOI":"10.5555\/3305890.3306024"},{"key":"e_1_3_3_92_2","first-page":"415","article-title":"\u201cCloze Procedure\u201d: A new tool for measuring readability","volume":"30","author":"Taylor W. L.","year":"1953","unstructured":"W. L. Taylor. 1953. \u201cCloze Procedure\u201d: A new tool for measuring readability. Journal. Mass Commun. Quart. 30 (1953), 415\u2013433.","journal-title":"Journal. Mass Commun. Quart."},{"key":"e_1_3_3_93_2","doi-asserted-by":"publisher","DOI":"10.3233\/FAIA230148"},{"key":"e_1_3_3_94_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-94-010-0575-3"},{"issue":"11","key":"e_1_3_3_95_2","article-title":"Visualizing data using t-SNE.","volume":"9","author":"Maaten Laurens Van der","year":"2008","unstructured":"Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 11 (2008).","journal-title":"J. Mach. Learn. Res."},{"key":"e_1_3_3_96_2","doi-asserted-by":"publisher","DOI":"10.3389\/fnhum.2015.00420"},{"key":"e_1_3_3_97_2","article-title":"Attention is all you need","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
arXiv preprint arXiv:1706.03762 (2017).","journal-title":"arXiv preprint arXiv:1706.03762"},{"key":"e_1_3_3_98_2","article-title":"Saliency is a possible red herring when diagnosing poor generalization","author":"Viviano Joseph D.","year":"2019","unstructured":"Joseph D. Viviano, Becks Simpson, Francis Dutil, Yoshua Bengio, and Joseph Paul Cohen. 2019. Saliency is a possible red herring when diagnosing poor generalization. In Proceedings of the International Conference on Learning Representations (ICLR\u201919).","journal-title":"Proceedings of the International Conference on Learning Representations (ICLR\u201919)"},{"key":"e_1_3_3_99_2","first-page":"12225","article-title":"PGM-Explainer: Probabilistic graphical model explanations for graph neural networks","volume":"33","author":"Vu Minh","year":"2020","unstructured":"Minh Vu and My T. Thai. 2020. PGM-Explainer: Probabilistic graphical model explanations for graph neural networks. Adv. Neural Inf. Process. Syst. 33 (2020), 12225\u201312235.","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"e_1_3_3_100_2","volume-title":"NeurIPS, Workshop on Human and Machine Decisions","author":"Wagner Benedikt","year":"2021","unstructured":"Benedikt Wagner and Artur d\u2019Avila Garcez. 2021. Neural-symbolic integration for interactive learning and conceptual grounding. In NeurIPS, Workshop on Human and Machine Decisions. Retrieved from https:\/\/arxiv.org\/abs\/2112.11805"},{"key":"e_1_3_3_101_2","article-title":"A neurosymbolic approach to AI alignment","author":"Wagner Benedikt","year":"2024","unstructured":"Benedikt Wagner and Artur d\u2019Avila Garcez. 2024. A neurosymbolic approach to AI alignment. 
Neurosymbolic AI. Retrieved from https:\/\/neurosymbolic-ai-journal.com\/system\/files\/nai-paper-729.pdf","journal-title":"Neurosymbolic AI"},{"key":"e_1_3_3_102_2","volume-title":"Proceedings of the AAAI Spring Symposium (AAAI-MAKE\u201921)","author":"Wagner Benedikt","year":"2021","unstructured":"Benedikt Wagner and Artur S. D\u2019Avila Garcez. 2021. Neural-symbolic integration for fairness in AI. In Proceedings of the AAAI Spring Symposium (AAAI-MAKE\u201921)."},{"key":"e_1_3_3_103_2","article-title":"Beyond explaining: Opportunities and challenges of XAI-based model improvement","author":"Weber Leander","year":"2023","unstructured":"Leander Weber, Sebastian Lapuschkin, Alexander Binder, and Wojciech Samek. 2023. Beyond explaining: Opportunities and challenges of XAI-based model improvement. Inf. Fusion 92 (2023), 154\u2013176.","journal-title":"Inf. Fusion"},{"key":"e_1_3_3_104_2","unstructured":"Lilian Weng. 2018. Attention? Attention! Retrieved from http:\/\/lilianweng.github.io\/lil-log\/2018\/06\/24\/attention-attention.html"},{"key":"e_1_3_3_105_2","volume-title":"Proceedings of the 24th European Conference on Artificial Intelligence","author":"White Adam","year":"2020","unstructured":"Adam White and Artur d\u2019Avila Garcez. 2020. Measurable counterfactual local explanations for any classifier. In Proceedings of the 24th European Conference on Artificial Intelligence. Retrieved from http:\/\/arxiv.org\/abs\/1908.03020"},{"key":"e_1_3_3_106_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N18-1101"},{"key":"e_1_3_3_107_2","first-page":"20554","article-title":"On completeness-aware concept-based explanations in deep neural networks","volume":"33","author":"Yeh Chih-Kuan","year":"2020","unstructured":"Chih-Kuan Yeh, Been Kim, Sercan Arik, Chun-Liang Li, Tomas Pfister, and Pradeep Ravikumar. 2020. On completeness-aware concept-based explanations in deep neural networks. Adv. Neural Inf. Process. Syst. 33 (2020), 20554\u201320565.","journal-title":"Adv. 
Neural Inf. Process. Syst."},{"key":"e_1_3_3_108_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2021.107899"},{"key":"e_1_3_3_109_2","article-title":"GNNExplainer: Generating explanations for graph neural networks","volume":"32","author":"Ying Zhitao","year":"2019","unstructured":"Zhitao Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. 2019. GNNExplainer: Generating explanations for graph neural networks. Adv. Neural Inf. Process. Syst. 32 (2019).","journal-title":"Adv. Neural Inf. Process. Syst."}],"container-title":["ACM Computing Surveys"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3670685","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3670685","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T00:58:20Z","timestamp":1750294700000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3670685"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,11,7]]},"references-count":108,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2025,2,28]]}},"alternative-id":["10.1145\/3670685"],"URL":"https:\/\/doi.org\/10.1145\/3670685","relation":{},"ISSN":["0360-0300","1557-7341"],"issn-type":[{"value":"0360-0300","type":"print"},{"value":"1557-7341","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,11,7]]},"assertion":[{"value":"2023-03-08","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-05-08","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-11-07","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}