{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,29]],"date-time":"2026-04-29T00:37:06Z","timestamp":1777423026847,"version":"3.51.4"},"reference-count":177,"publisher":"Springer Science and Business Media LLC","issue":"5","license":[{"start":{"date-parts":[[2023,6,3]],"date-time":"2023-06-03T00:00:00Z","timestamp":1685750400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,6,3]],"date-time":"2023-06-03T00:00:00Z","timestamp":1685750400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100010663","name":"H2020 European Research Council","doi-asserted-by":"publisher","award":["834756"],"award-info":[{"award-number":["834756"]}],"id":[{"id":"10.13039\/100010663","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100010669","name":"H2020 LEIT Information and Communication Technologies","doi-asserted-by":"publisher","award":["952026"],"award-info":[{"award-number":["952026"]}],"id":[{"id":"10.13039\/100010669","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100010669","name":"H2020 LEIT Information and Communication Technologies","doi-asserted-by":"publisher","award":["952215"],"award-info":[{"award-number":["952215"]}],"id":[{"id":"10.13039\/100010669","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100010662","name":"H2020 Excellent Science","doi-asserted-by":"publisher","award":["871042"],"award-info":[{"award-number":["871042"]}],"id":[{"id":"10.13039\/100010662","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Data Min Knowl Disc"],"published-print":{"date-parts":[[2023,9]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>The rise of sophisticated black-box machine learning models in Artificial 
Intelligence systems has prompted the need for explanation methods that reveal how these models work in a way that is understandable to users and decision makers. Unsurprisingly, the state of the art currently exhibits a plethora of explainers providing many different types of explanations. With the aim of providing a compass for researchers and practitioners, this paper proposes a categorization of explanation methods from the perspective of the type of explanation they return, also considering the different input data formats. The paper accounts for the most representative explainers to date, also discussing similarities and discrepancies of returned explanations through their visual appearance. A companion website to the paper is provided and is continuously updated as new explainers appear. Moreover, a subset of the most robust and widely adopted explainers is benchmarked with respect to a repertoire of quantitative metrics.<\/jats:p>","DOI":"10.1007\/s10618-023-00933-9","type":"journal-article","created":{"date-parts":[[2023,6,3]],"date-time":"2023-06-03T11:01:35Z","timestamp":1685790095000},"page":"1719-1778","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":180,"title":["Benchmarking and survey of explanation methods for black box 
models"],"prefix":"10.1007","volume":"37","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-9583-3897","authenticated-orcid":false,"given":"Francesco","family":"Bodria","sequence":"first","affiliation":[]},{"given":"Fosca","family":"Giannotti","sequence":"additional","affiliation":[]},{"given":"Riccardo","family":"Guidotti","sequence":"additional","affiliation":[]},{"given":"Francesca","family":"Naretto","sequence":"additional","affiliation":[]},{"given":"Dino","family":"Pedreschi","sequence":"additional","affiliation":[]},{"given":"Salvatore","family":"Rinzivillo","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,6,3]]},"reference":[{"key":"933_CR1","doi-asserted-by":"crossref","unstructured":"Abujabal A, Roy RS, Yahya M, et\u00a0al (2017) QUINT: interpretable question answering over knowledge bases. In: Proceedings of the 2017 conference on empirical methods in natural language processing, EMNLP 2017, Copenhagen, Denmark\u2014system demonstrations","DOI":"10.18653\/v1\/D17-2011"},{"key":"933_CR2","doi-asserted-by":"crossref","unstructured":"Adadi A, Berrada M (2018) Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access","DOI":"10.1109\/ACCESS.2018.2870052"},{"key":"933_CR3","unstructured":"Adebayo J, Gilmer J, Muelly M, et\u00a0al (2018) Sanity checks for saliency maps. In: Advances in neural information processing systems 31: annual conference on neural information processing systems 2018, NeurIPS 2018, Montr\u00e9al, Canada"},{"key":"933_CR4","unstructured":"Adebayo J, Muelly M, Liccardi I, et\u00a0al (2020) Debugging tests for model explanations. In: Advances in neural information processing systems 33: annual conference on neural information processing systems 2020, NeurIPS 2020, virtual"},{"key":"933_CR5","unstructured":"Agarwal R, Melnick L, Frosst N, et\u00a0al (2021) Neural additive models: Interpretable machine learning with neural nets. 
In: Advances in neural information processing systems 34: annual conference on neural information processing systems 2021, NeurIPS 2021, virtual"},{"key":"933_CR6","doi-asserted-by":"crossref","unstructured":"Aggarwal CC, Zhai C (2012) A survey of text classification algorithms. In: Mining text data. Springer, pp 163\u2013222","DOI":"10.1007\/978-1-4614-3223-4_6"},{"key":"933_CR7","doi-asserted-by":"crossref","unstructured":"Albini E, Rago A, Baroni P, et\u00a0al (2020) Relation-based counterfactual explanations for bayesian network classifiers. In: Proceedings of the twenty-ninth international joint conference on artificial intelligence, IJCAI 2020","DOI":"10.24963\/ijcai.2020\/63"},{"key":"933_CR8","unstructured":"Alvarez-Melis D, Jaakkola TS (2018) Towards robust interpretability with self-explaining neural networks. In: Advances in neural information processing systems 31: annual conference on neural information processing systems 2018, NeurIPS 2018, Montr\u00e9al, Canada"},{"key":"933_CR9","unstructured":"Anjomshoae S, Najjar A, Calvaresi D, et\u00a0al (2019) Explainable agents and robots: Results from a systematic literature review. In: Proceedings of the 18th international conference on autonomous agents and multiagent systems, AAMAS \u201919, Montreal, QC, Canada"},{"key":"933_CR10","unstructured":"Anjomshoae S, Kampik T, Fr\u00e4mling K (2020) Py-ciu: a python library for explaining machine learning predictions using contextual importance and utility. In: IJCAI-PRICAI 2020 workshop on explainable artificial intelligence (XAI)"},{"key":"933_CR11","unstructured":"Apley DW, Zhu J (2016) Visualizing the effects of predictor variables in black box supervised learning models. arXiv preprint arXiv:1612.08468"},{"key":"933_CR12","doi-asserted-by":"crossref","unstructured":"Arras L, Montavon G, M\u00fcller K, et\u00a0al (2017) Explaining recurrent neural network predictions in sentiment analysis. 
In: Proceedings of the 8th workshop on computational approaches to subjectivity, sentiment and social media analysis, WASSA@EMNLP 2017, Copenhagen, Denmark","DOI":"10.18653\/v1\/W17-5221"},{"key":"933_CR13","unstructured":"Arrieta AB, Rodr\u00edguez ND, Ser JD, et\u00a0al (2020) Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf Fus"},{"key":"933_CR14","unstructured":"Artelt A, Hammer B (2019) On the computation of counterfactual explanations\u2014a survey. arXiv preprint arXiv:1911.07749"},{"key":"933_CR15","unstructured":"Arya V, Bellamy RKE, Chen P, et\u00a0al (2019) One explanation does not fit all: A toolkit and taxonomy of AI explainability techniques. arXiv preprint arXiv:1909.03012"},{"issue":"7","key":"933_CR16","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0130140","volume":"10","author":"S Bach","year":"2015","unstructured":"Bach S, Binder A, Montavon G et al (2015) On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS One 10(7):e0130140","journal-title":"PLoS One"},{"key":"933_CR17","unstructured":"Bahdanau D, Cho K, Bengio Y (2015) Neural machine translation by jointly learning to align and translate. In: 3rd International conference on learning representations, ICLR 2015, San Diego, CA, USA, conference track proceedings"},{"key":"933_CR18","doi-asserted-by":"crossref","unstructured":"Bien J, Tibshirani R (2011) Prototype selection for interpretable classification. Ann Appl Stat 2403\u20132424","DOI":"10.1214\/11-AOAS495"},{"key":"933_CR19","doi-asserted-by":"crossref","unstructured":"Blanco-Justicia A, Domingo-Ferrer J, Mart\u00ednez S, et\u00a0al (2020) Machine learning explainability via microaggregation and shallow decision trees. Knowl Based Syst","DOI":"10.1016\/j.knosys.2020.105532"},{"key":"933_CR20","doi-asserted-by":"crossref","unstructured":"Boz O (2002) Extracting decision trees from trained neural networks. 
In: Proceedings of the eighth ACM SIGKDD international conference on knowledge discovery and data mining, Edmonton, Alberta, Canada","DOI":"10.1145\/775047.775113"},{"issue":"1","key":"933_CR21","first-page":"4","volume":"3","author":"S Bramhall","year":"2020","unstructured":"Bramhall S, Horn H, Tieu M et al (2020) Qlime-a quadratic local interpretable model-agnostic explanation approach. SMU Data Sci Rev 3(1):4","journal-title":"SMU Data Sci Rev"},{"key":"933_CR22","doi-asserted-by":"crossref","unstructured":"Byrne RM (2019) Counterfactuals in explainable artificial intelligence (XAI): evidence from human reasoning. In: IJCAI, pp 6276\u20136282","DOI":"10.24963\/ijcai.2019\/876"},{"issue":"4","key":"933_CR23","doi-asserted-by":"publisher","first-page":"760","DOI":"10.1037\/xlm0000756","volume":"46","author":"RM Byrne","year":"2020","unstructured":"Byrne RM, Johnson-Laird P (2020) If and or: real and counterfactual possibilities in their truth and probability. J Exp Psychol Learn Mem Cogn 46(4):760","journal-title":"J Exp Psychol Learn Mem Cogn"},{"key":"933_CR24","unstructured":"Cai L, Ji S (2020) A multi-scale approach for graph link prediction. In: The thirty-fourth AAAI conference on artificial intelligence, AAAI 2020, the thirty-second innovative applications of artificial intelligence conference, IAAI 2020, the tenth AAAI symposium on educational advances in artificial intelligence, EAAI 2020, New York, NY, USA"},{"key":"933_CR25","doi-asserted-by":"crossref","unstructured":"Calamoneri T (2006) The L(h, k)-labelling problem: a survey and annotated bibliography. Comput J","DOI":"10.1093\/comjnl\/bxl018"},{"issue":"8","key":"933_CR26","doi-asserted-by":"publisher","first-page":"832","DOI":"10.3390\/electronics8080832","volume":"8","author":"DV Carvalho","year":"2019","unstructured":"Carvalho DV, Pereira EM, Cardoso JS (2019) Machine learning interpretability: a survey on methods and metrics. 
Electronics 8(8):832","journal-title":"Electronics"},{"key":"933_CR27","doi-asserted-by":"crossref","unstructured":"Chattopadhay A, Sarkar A, Howlader P, et\u00a0al (2018) Grad-cam++: generalized gradient-based visual explanations for deep convolutional networks. In: 2018 IEEE winter conference on applications of computer vision (WACV), IEEE","DOI":"10.1109\/WACV.2018.00097"},{"key":"933_CR28","doi-asserted-by":"crossref","unstructured":"Chemmengath SA, Azad AP, Luss R, et\u00a0al (2022) Let the CAT out of the bag: Contrastive attributed explanations for text. In: Proceedings of the 2022 conference on empirical methods in natural language processing, EMNLP 2022, Abu Dhabi, United Arab Emirates","DOI":"10.18653\/v1\/2022.emnlp-main.484"},{"key":"933_CR29","unstructured":"Chen J, Song L, Wainwright MJ, et\u00a0al (2018) Learning to explain: an information-theoretic perspective on model interpretation. In: Proceedings of the 35th international conference on machine learning, ICML 2018, Stockholmsm\u00e4ssan, Stockholm, Sweden"},{"key":"933_CR30","unstructured":"Chen C, Li O, Tao D, et\u00a0al (2019) This looks like that: deep learning for interpretable image recognition. In: Advances in neural information processing systems 32: annual conference on neural information processing systems 2019, NeurIPS 2019, Vancouver, BC, Canada"},{"key":"933_CR31","doi-asserted-by":"crossref","unstructured":"Cheng J, Dong L, Lapata M (2016) Long short-term memory-networks for machine reading. In: Proceedings of the 2016 conference on empirical methods in natural language processing, EMNLP 2016, Austin, Texas, USA","DOI":"10.18653\/v1\/D16-1053"},{"key":"933_CR32","unstructured":"Chipman H, George E, McCulloh R (1998) Making sense of a forest of trees. Comput Sci Stat"},{"key":"933_CR33","doi-asserted-by":"crossref","unstructured":"Chouldechova A (2017) Fair prediction with disparate impact: a study of bias in recidivism prediction instruments. 
Big Data","DOI":"10.1089\/big.2016.0047"},{"key":"933_CR34","doi-asserted-by":"crossref","unstructured":"Chowdhary K (2020) Natural language processing. In: Fundamentals of artificial intelligence. Springer, pp 603\u2013649","DOI":"10.1007\/978-81-322-3972-7_19"},{"key":"933_CR35","doi-asserted-by":"crossref","unstructured":"Chowdhury T, Rahimi R, Allan J (2022) Equi-explanation maps: concise and informative global summary explanations. In: 2022 ACM conference on fairness, accountability, and transparency, FAccT \u201922","DOI":"10.1145\/3531146.3533112"},{"key":"933_CR36","doi-asserted-by":"crossref","unstructured":"Cover TM, Hart PE (1967) Nearest neighbor pattern classification. IEEE Trans Inf Theory","DOI":"10.1109\/TIT.1967.1053964"},{"key":"933_CR37","unstructured":"Craven MW, Shavlik JW (1995) Extracting tree-structured representations of trained networks. In: Advances in neural information processing systems 8, NIPS, Denver, CO, USA"},{"key":"933_CR38","unstructured":"Danilevsky M, Qian K, Aharonov R, et\u00a0al (2020) A survey of the state of explainable AI for natural language processing. In: Proceedings of the 1st conference of the Asia-Pacific chapter of the association for computational linguistics and the 10th international joint conference on natural language processing, AACL\/IJCNLP 2020, Suzhou, China"},{"key":"933_CR39","doi-asserted-by":"crossref","unstructured":"Das A, Gupta C, Kovatchev V, et\u00a0al (2022) Prototex: explaining model decisions with prototype tensors. In: Proceedings of the 60th annual meeting of the association for computational linguistics (vol. 1: long papers), ACL 2022, Dublin, Ireland","DOI":"10.18653\/v1\/2022.acl-long.213"},{"key":"933_CR40","unstructured":"Dash S, G\u00fcnl\u00fck O, Wei D (2018) Boolean decision rules via column generation. 
In: Advances in neural information processing systems 31: annual conference on neural information processing systems 2018, NeurIPS 2018, Montr\u00e9al, Canada"},{"key":"933_CR41","doi-asserted-by":"crossref","unstructured":"Desai S, Ramaswamy HG (2020) Ablation-cam: visual explanations for deep convolutional network via gradient-free localization. In: IEEE winter conference on applications of computer vision, WACV 2020, Snowmass Village, CO, USA","DOI":"10.1109\/WACV45572.2020.9093360"},{"key":"933_CR42","unstructured":"Dhurandhar A, Chen P, Luss R, et\u00a0al (2018) Explanations based on the missing: towards contrastive explanations with pertinent negatives. In: Advances in neural information processing systems 31: annual conference on neural information processing systems 2018, NeurIPS 2018, Montr\u00e9al, Canada"},{"key":"933_CR43","unstructured":"Doersch C (2016) Tutorial on variational autoencoders. arXiv preprint arXiv:1606.05908"},{"issue":"1\u20134","key":"933_CR44","doi-asserted-by":"publisher","first-page":"187","DOI":"10.3233\/IDA-1998-2303","volume":"2","author":"PM Domingos","year":"1998","unstructured":"Domingos PM (1998) Knowledge discovery via multiple models. Intell Data Anal 2(1\u20134):187\u2013202","journal-title":"Intell Data Anal"},{"key":"933_CR45","doi-asserted-by":"crossref","unstructured":"Donnelly J, Barnett AJ, Chen C (2022) Deformable protopnet: an interpretable image classifier using deformable prototypes. In: CVPR. IEEE, pp 10255\u201310265","DOI":"10.1109\/CVPR52688.2022.01002"},{"key":"933_CR46","unstructured":"Doshi-Velez F, Kim B (2017) Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608"},{"key":"933_CR47","doi-asserted-by":"crossref","unstructured":"Do\u0161ilovi\u0107 FK, Br\u010di\u0107 M, Hlupi\u0107 N (2018) Explainable artificial intelligence: a survey. 
In: 2018 41st International convention on information and communication technology, electronics and microelectronics (MIPRO), IEEE, pp 0210\u20130215","DOI":"10.23919\/MIPRO.2018.8400040"},{"key":"933_CR48","doi-asserted-by":"crossref","unstructured":"ElShawi R, Sherif Y, Al-Mallah M, et\u00a0al (2019) Ilime: local and global interpretable model-agnostic explainer of black-box decision. In: European conference on advances in databases and information systems. Springer, pp 53\u201368","DOI":"10.1007\/978-3-030-28730-6_4"},{"key":"933_CR49","unstructured":"Erion GG, Janizek JD, Sturmfels P, et\u00a0al (2019) Learning explainable models using attribution priors. arXiv preprint arXiv:1906.10670"},{"key":"933_CR50","doi-asserted-by":"crossref","unstructured":"Fong R, Patrick M, Vedaldi A (2019) Understanding deep networks via extremal perturbations and smooth masks. In: 2019 IEEE\/CVF international conference on computer vision, ICCV 2019, Seoul, Korea (South)","DOI":"10.1109\/ICCV.2019.00304"},{"issue":"1","key":"933_CR51","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/2594473.2594475","volume":"15","author":"AA Freitas","year":"2013","unstructured":"Freitas AA (2013) Comprehensible classification models: a position paper. SIGKDD Explor 15(1):1\u201310","journal-title":"SIGKDD Explor"},{"key":"933_CR52","doi-asserted-by":"publisher","first-page":"916","DOI":"10.1214\/07-AOAS148","volume":"2","author":"J Friedman","year":"2008","unstructured":"Friedman J, Popescu BE (2008) Predictive learning via rule ensembles. Ann Appl Stat 2:916\u2013954","journal-title":"Ann Appl Stat"},{"key":"933_CR53","doi-asserted-by":"crossref","unstructured":"Geler Z, Kurbalija V, Ivanovic M, et\u00a0al (2020) Weighted KNN and constrained elastic distances for time-series classification. Expert Syst Appl","DOI":"10.1016\/j.eswa.2020.113829"},{"key":"933_CR54","unstructured":"Ghorbani A, Wexler J, Zou JY, et\u00a0al (2019) Towards automatic concept-based explanations. 
In: Advances in neural information processing systems 32: annual conference on neural information processing systems 2019, NeurIPS 2019, Vancouver, BC, Canada"},{"key":"933_CR55","doi-asserted-by":"crossref","unstructured":"Gilpin LH, Bau D, Yuan BZ, et\u00a0al (2018) Explaining explanations: an overview of interpretability of machine learning. In: 5th IEEE international conference on data science and advanced analytics, DSAA 2018, Turin, Italy","DOI":"10.1109\/DSAA.2018.00018"},{"issue":"2","key":"933_CR56","doi-asserted-by":"publisher","first-page":"75","DOI":"10.1089\/big.2016.0007","volume":"4","author":"M Gleicher","year":"2016","unstructured":"Gleicher M (2016) A framework for considering comprehensibility in modeling. Big Data 4(2):75\u201388","journal-title":"Big Data"},{"key":"933_CR57","unstructured":"Goebel R, Chander A, Holzinger K, et\u00a0al (2018) Explainable AI: the new 42? In: Machine learning and knowledge extraction\u2014second IFIP TC 5, TC 8\/WG 8.4, 8.9, TC 12\/WG 12.9 international cross-domain conference, CD-MAKE 2018, Hamburg, Germany, Proceedings"},{"key":"933_CR58","unstructured":"Goyal Y, Shalit U, Kim B (2019) Explaining classifiers with causal concept effect (cace). arXiv preprint arXiv:1907.07165"},{"key":"933_CR59","doi-asserted-by":"crossref","unstructured":"Guidotti R (2021) Evaluating local explanation methods on ground truth. Artif Intell","DOI":"10.1016\/j.artint.2020.103428"},{"key":"933_CR60","doi-asserted-by":"crossref","unstructured":"Guidotti R (2022) Counterfactual explanations and how to find them: literature review and benchmarking. DAMI, pp 1\u201355","DOI":"10.1007\/s10618-022-00831-6"},{"key":"933_CR61","doi-asserted-by":"crossref","unstructured":"Guidotti R, Monreale A, Giannotti F, et\u00a0al (2019a) Factual and counterfactual explanations for black box decision making. 
IEEE Intell Syst","DOI":"10.1109\/MIS.2019.2957223"},{"key":"933_CR62","doi-asserted-by":"crossref","unstructured":"Guidotti R, Monreale A, Matwin S, et\u00a0al (2019b) Black box explanation by learning image exemplars in the latent feature space. In: Machine learning and knowledge discovery in databases\u2014European conference, ECML PKDD 2019, W\u00fcrzburg, Germany, proceedings, part I","DOI":"10.1007\/978-3-030-46150-8_12"},{"key":"933_CR63","doi-asserted-by":"crossref","unstructured":"Guidotti R, Monreale A, Ruggieri S, et\u00a0al (2019c) A survey of methods for explaining black box models. ACM Comput Surv","DOI":"10.1145\/3236009"},{"key":"933_CR64","doi-asserted-by":"crossref","unstructured":"Guidotti R, Monreale A, Matwin S, et\u00a0al (2020a) Explaining image classifiers generating exemplars and counter-exemplars from latent representations. In: The thirty-fourth AAAI conference on artificial intelligence, AAAI 2020, the thirty-second innovative applications of artificial intelligence conference, IAAI 2020, the tenth AAAI symposium on educational advances in artificial intelligence, EAAI 2020, New York, NY, USA","DOI":"10.1609\/aaai.v34i09.7116"},{"key":"933_CR65","doi-asserted-by":"crossref","unstructured":"Guidotti R, Monreale A, Spinnato F, et\u00a0al (2020b) Explaining any time series classifier. In: 2nd IEEE international conference on cognitive machine intelligence, CogMI 2020, Atlanta, GA, USA","DOI":"10.1109\/CogMI50398.2020.00029"},{"key":"933_CR66","doi-asserted-by":"crossref","unstructured":"Gurumoorthy KS, Dhurandhar A, Cecchi GA, et\u00a0al (2019) Efficient data representation by selecting prototypes with importance weights. In: 2019 IEEE international conference on data mining, ICDM 2019, Beijing, China","DOI":"10.1109\/ICDM.2019.00036"},{"key":"933_CR67","unstructured":"Hand DJ, Till RJ (2001) A simple generalisation of the area under the ROC curve for multiple class classification problems. 
Mach Learn"},{"key":"933_CR68","doi-asserted-by":"crossref","unstructured":"Hartmann Y, Liu H, Lahrberg S, et\u00a0al (2022) Interpretable high-level features for human activity recognition. In: Proceedings of the 15th international joint conference on biomedical engineering systems and technologies, BIOSTEC 2022, vol. 4: BIOSIGNALS, Online Streaming","DOI":"10.5220\/0010840500003123"},{"key":"933_CR69","doi-asserted-by":"crossref","unstructured":"Hase P, Bansal M (2020) Evaluating explainable AI: which algorithmic explanations help users predict model behavior? In: Proceedings of the 58th annual meeting of the association for computational linguistics, ACL 2020, Online","DOI":"10.18653\/v1\/2020.acl-main.491"},{"key":"933_CR70","volume-title":"Generalized additive models","author":"TJ Hastie","year":"1990","unstructured":"Hastie TJ, Tibshirani RJ (1990) Generalized additive models, vol 43. CRC Press"},{"key":"933_CR71","doi-asserted-by":"crossref","unstructured":"Hind M, Wei D, Campbell M, et\u00a0al (2019) TED: teaching AI to explain its decisions. In: Proceedings of the 2019 AAAI\/ACM conference on AI, ethics, and society, AIES 2019, Honolulu, HI, USA","DOI":"10.1145\/3306618.3314273"},{"key":"933_CR72","doi-asserted-by":"crossref","unstructured":"Hoover B, Strobelt H, Gehrmann S (2019) exbert: a visual analysis tool to explore learned representations in transformers models. arXiv preprint arXiv:1910.05276","DOI":"10.18653\/v1\/2020.acl-demos.22"},{"key":"933_CR73","unstructured":"Huang Q, Yamada M, Tian Y, et\u00a0al (2020) Graphlime: local interpretable model explanations for graph neural networks. arXiv preprint arXiv:2001.06216"},{"key":"933_CR74","unstructured":"Hvilsh\u00f8j F, Iosifidis A, Assent I (2021) ECINN: efficient counterfactuals from invertible neural networks. In: BMVC. BMVA Press, p\u00a043"},{"key":"933_CR75","unstructured":"Jain S, Wallace BC (2019) Attention is not explanation. 
In: Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, NAACL-HLT 2019, Minneapolis, MN, USA, vol. 1 (long and short papers)"},{"key":"933_CR76","unstructured":"Jeyakumar JV, Noor J, Cheng Y, et\u00a0al (2020) How can I explain this to you? An empirical study of deep neural network explanation methods. In: Advances in neural information processing systems 33: annual conference on neural information processing systems 2020, NeurIPS 2020, virtual"},{"key":"933_CR77","doi-asserted-by":"crossref","unstructured":"Kamakshi V, Gupta U, Krishnan NC (2021) PACE: posthoc architecture-agnostic concept extractor for explaining CNNs. In: International joint conference on neural networks, IJCNN 2021, Shenzhen, China","DOI":"10.1109\/IJCNN52387.2021.9534369"},{"key":"933_CR78","doi-asserted-by":"crossref","unstructured":"Kanamori K, Takagi T, Kobayashi K, et\u00a0al (2020) DACE: distribution-aware counterfactual explanation by mixed-integer linear optimization. In: Proceedings of the twenty-ninth international joint conference on artificial intelligence, IJCAI 2020","DOI":"10.24963\/ijcai.2020\/395"},{"key":"933_CR79","doi-asserted-by":"crossref","unstructured":"Kapishnikov A, Bolukbasi T, Vi\u00e9gas FB, et\u00a0al (2019) XRAI: better attributions through regions. In: 2019 IEEE\/CVF international conference on computer vision, ICCV 2019, Seoul, Korea (South)","DOI":"10.1109\/ICCV.2019.00505"},{"key":"933_CR80","unstructured":"Karimi A, Barthe G, Balle B, et\u00a0al (2020a) Model-agnostic counterfactual explanations for consequential decisions. In: The 23rd international conference on artificial intelligence and statistics, AISTATS 2020, Online [Palermo, Sicily, Italy]"},{"key":"933_CR81","unstructured":"Karimi A, Barthe G, Sch\u00f6lkopf B, et\u00a0al (2020b) A survey of algorithmic recourse: definitions, formulations, solutions, and prospects. 
arXiv preprint arXiv:2010.04050"},{"key":"933_CR82","doi-asserted-by":"crossref","unstructured":"Katehakis Jr MN, Veinott AF (1987) The multi-armed bandit problem: decomposition and computation. Math Oper Res","DOI":"10.1287\/moor.12.2.262"},{"key":"933_CR83","doi-asserted-by":"crossref","unstructured":"Kenny EM, Keane MT (2021) On generating plausible counterfactual and semi-factual explanations for deep learning. In: AAAI. AAAI Press, pp 11575\u201311585","DOI":"10.1609\/aaai.v35i13.17377"},{"key":"933_CR84","doi-asserted-by":"crossref","unstructured":"Kim B, Chacha CM, Shah JA (2015) Inferring team task plans from human meetings: a generative modeling approach with logic-based prior. J Artif Intell Res","DOI":"10.1613\/jair.4496"},{"key":"933_CR85","unstructured":"Kim B, Koyejo O, Khanna R (2016) Examples are not enough, learn to criticize! criticism for interpretability. In: Advances in neural information processing systems 29: annual conference on neural information processing systems 2016, Barcelona, Spain"},{"key":"933_CR86","unstructured":"Kim B, Wattenberg M, Gilmer J, et\u00a0al (2018) Interpretability beyond feature attribution: quantitative testing with concept activation vectors (TCAV). In: Proceedings of the 35th international conference on machine learning, ICML 2018, Stockholmsm\u00e4ssan, Stockholm, Sweden"},{"key":"933_CR87","unstructured":"Kipf TN, Welling M (2017) Semi-supervised classification with graph convolutional networks. In: 5th International conference on learning representations, ICLR 2017, Toulon, France, conference track proceedings"},{"key":"933_CR88","unstructured":"Koh PW, Liang P (2017) Understanding black-box predictions via influence functions. In: Proceedings of the 34th international conference on machine learning, ICML 2017, Sydney, NSW, Australia"},{"key":"933_CR89","unstructured":"Kurenkov A (2020) Lessons from the pulse model and discussion. 
The gradient"},{"key":"933_CR90","doi-asserted-by":"crossref","unstructured":"Lakkaraju H, Bach SH, Leskovec J (2016) Interpretable decision sets: a joint framework for description and prediction. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, San Francisco, CA, USA","DOI":"10.1145\/2939672.2939874"},{"key":"933_CR91","doi-asserted-by":"crossref","unstructured":"Lampridis O, Guidotti R, Ruggieri S (2020) Explaining sentiment classification with synthetic exemplars and counter-exemplars. In: Discovery science\u201423rd international conference, DS 2020, Thessaloniki, Greece, Proceedings","DOI":"10.1007\/978-3-030-61527-7_24"},{"key":"933_CR92","doi-asserted-by":"crossref","unstructured":"Lang O, Gandelsman Y, Yarom M, et\u00a0al (2021) Explaining in style: training a GAN to explain a classifier in stylespace. In: ICCV. IEEE, pp 673\u2013682","DOI":"10.1109\/ICCV48922.2021.00073"},{"key":"933_CR93","doi-asserted-by":"crossref","unstructured":"Lapuschkin S, W\u00e4ldchen S, Binder A, et\u00a0al (2019) Unmasking clever hans predictors and assessing what machines really learn. arXiv preprint arXiv:1902.10178","DOI":"10.1038\/s41467-019-08987-4"},{"key":"933_CR94","doi-asserted-by":"crossref","unstructured":"Lee Y, Wei C, Cheng T, et\u00a0al (2012) Nearest-neighbor-based approach to time-series classification. Decis Support Syst","DOI":"10.1016\/j.dss.2011.12.014"},{"key":"933_CR95","doi-asserted-by":"crossref","unstructured":"Letham B, Rudin C, McCormick TH, et\u00a0al (2015) Interpretable classifiers using rules and Bayesian analysis: building a better stroke prediction model. arXiv preprint arXiv:1511.01644","DOI":"10.1214\/15-AOAS848"},{"key":"933_CR96","unstructured":"Ley D, Mishra S, Magazzeni D (2022) Global counterfactual explanations: investigations, implementations and improvements. 
In: ICLR 2022 workshop on PAIR$$\\wedge $$2Struct: privacy, accountability, interpretability, robustness, reasoning on structured data. https:\/\/openreview.net\/forum?id=Btbgp0dOWZ9"},{"key":"933_CR97","unstructured":"Li J, Monroe W, Jurafsky D (2016) Understanding neural networks through representation erasure. arXiv preprint arXiv:1612.08220"},{"key":"933_CR98","unstructured":"Li O, Liu H, Chen C, et\u00a0al (2018) Deep learning for case-based reasoning through prototypes: a neural network that explains its predictions. In: Proceedings of the thirty-second AAAI conference on artificial intelligence, (AAAI-18), the 30th innovative applications of artificial intelligence (IAAI-18), and the 8th AAAI symposium on educational advances in artificial intelligence (EAAI-18), New Orleans, Louisiana, USA"},{"key":"933_CR99","doi-asserted-by":"crossref","unstructured":"Li H, Tian Y, Mueller K, et\u00a0al (2019) Beyond saliency: understanding convolutional neural networks from saliency prediction on layer-wise relevance propagation. Image Vis Comput","DOI":"10.1016\/j.imavis.2019.02.005"},{"key":"933_CR100","unstructured":"Lipovetsky S (2022) Explanatory model analysis: Explore, explain and examine predictive models, by Przemyslaw Biecek, Tomasz Burzykowski, Boca Raton, FL, Chapman and Hall\/CRC, Taylor & Francis Group, 2021, xiii + 311 pp., \\$ 79.96 (hbk), ISBN 978-0-367-13559-1. Technometrics"},{"key":"933_CR101","unstructured":"Looveren AV, Klaise J (2021) Interpretable counterfactual explanations guided by prototypes. In: Machine learning and knowledge discovery in databases. Research track\u2014European conference, ECML PKDD 2021, Bilbao, Spain, proceedings, part II"},{"key":"933_CR102","doi-asserted-by":"crossref","unstructured":"Lucic A, Haned H, de\u00a0Rijke M (2020) Why does my model fail?: Contrastive local explanations for retail forecasting. 
In: FAT* \u201920: conference on fairness, accountability, and transparency, Barcelona, Spain","DOI":"10.1145\/3351095.3372824"},{"key":"933_CR103","unstructured":"Lucic A, ter Hoeve MA, Tolomei G, et\u00a0al (2022) Cf-gnnexplainer: counterfactual explanations for graph neural networks. In: International conference on artificial intelligence and statistics, AISTATS 2022, virtual event"},{"key":"933_CR104","unstructured":"Lundberg SM, Lee S (2017) A unified approach to interpreting model predictions. In: Advances in neural information processing systems 30: annual conference on neural information processing systems 2017, Long Beach, CA, USA"},{"key":"933_CR105","unstructured":"Luss R, Chen P, Dhurandhar A, et\u00a0al (2019) Generating contrastive explanations with monotonic attribute functions. arXiv preprint arXiv:1905.12698"},{"key":"933_CR106","doi-asserted-by":"crossref","unstructured":"Luss R, Chen P, Dhurandhar A, et\u00a0al (2021) Leveraging latent features for local explanations. In: KDD \u201921: the 27th ACM SIGKDD conference on knowledge discovery and data mining, virtual event, Singapore","DOI":"10.1145\/3447548.3467265"},{"key":"933_CR107","doi-asserted-by":"crossref","unstructured":"Madaan N, Padhi I, Panwar N, et\u00a0al (2021) Generate your counterfactuals: towards controlled counterfactual generation for text. In: Thirty-fifth AAAI conference on artificial intelligence, AAAI 2021, thirty-third conference on innovative applications of artificial intelligence, IAAI 2021, the eleventh symposium on educational advances in artificial intelligence, EAAI 2021, virtual event","DOI":"10.1609\/aaai.v35i15.17594"},{"key":"933_CR108","doi-asserted-by":"crossref","unstructured":"Martens D, Provost FJ (2014) Explaining data-driven document classifications. 
MIS Q","DOI":"10.25300\/MISQ\/2014\/38.1.04"},{"key":"933_CR109","doi-asserted-by":"crossref","unstructured":"Martens D, Baesens B, Gestel TV, et\u00a0al (2007) Comprehensible credit scoring models using rule extraction from support vector machines. Eur J Oper Res","DOI":"10.2139\/ssrn.878283"},{"key":"933_CR110","doi-asserted-by":"crossref","unstructured":"Miller T (2019) Explanation in artificial intelligence: insights from the social sciences. Artif Intell","DOI":"10.1016\/j.artint.2018.07.007"},{"key":"933_CR111","doi-asserted-by":"crossref","unstructured":"Ming Y, Qu H, Bertini E (2019) Rulematrix: visualizing and understanding classifiers with rules. IEEE Trans Vis Comput Graph","DOI":"10.1109\/TVCG.2018.2864812"},{"key":"933_CR112","doi-asserted-by":"crossref","unstructured":"Mollas I, Bassiliades N, Tsoumakas G (2019) Lionets: local interpretation of neural networks through penultimate layer decoding. In: Machine learning and knowledge discovery in databases\u2014international workshops of ECML PKDD 2019, W\u00fcrzburg, Germany, proceedings, part I","DOI":"10.1007\/978-3-030-43823-4_23"},{"key":"933_CR113","unstructured":"Molnar C (2022) Model-agnostic interpretable machine learning. PhD thesis, Ludwig Maximilian University of Munich, Germany"},{"key":"933_CR114","doi-asserted-by":"crossref","unstructured":"Mothilal RK, Sharma A, Tan C (2020) Explaining machine learning classifiers through diverse counterfactual explanations. In: FAT* \u201920: conference on fairness, accountability, and transparency, Barcelona, Spain","DOI":"10.1145\/3351095.3372850"},{"key":"933_CR115","doi-asserted-by":"crossref","unstructured":"Muhammad MB, Yeasin M (2020) Eigen-cam: Class activation map using principal components. 
In: 2020 International joint conference on neural networks, IJCNN 2020, Glasgow, UK","DOI":"10.1109\/IJCNN48605.2020.9206626"},{"issue":"44","key":"933_CR116","doi-asserted-by":"publisher","first-page":"22071","DOI":"10.1073\/pnas.1900654116","volume":"116","author":"WJ Murdoch","year":"2019","unstructured":"Murdoch WJ, Singh C, Kumbier K et al (2019) Definitions, methods, and applications in interpretable machine learning. Proc Natl Acad Sci 116(44):22071\u201322080","journal-title":"Proc Natl Acad Sci"},{"key":"933_CR117","doi-asserted-by":"crossref","unstructured":"Nauta M, van Bree R, Seifert C (2021) Neural prototype trees for interpretable fine-grained image recognition. In: CVPR. Computer vision foundation\/IEEE, pp 14933\u201314943","DOI":"10.1109\/CVPR46437.2021.01469"},{"key":"933_CR118","unstructured":"Nori H, Jenkins S, Koch P, et\u00a0al (2019) Interpretml: a unified framework for machine learning interpretability. arXiv preprint arXiv:1909.09223"},{"key":"933_CR119","doi-asserted-by":"crossref","unstructured":"Pan D, Li X, Zhu D (2021) Explaining deep neural network models with adversarial gradient integration. In: Proceedings of the thirtieth international joint conference on artificial intelligence, IJCAI 2021, virtual event\/Montreal, Canada","DOI":"10.24963\/ijcai.2021\/396"},{"key":"933_CR120","doi-asserted-by":"crossref","unstructured":"Panigutti C, Perotti A, Pedreschi D (2020) Doctor XAI: an ontology-based approach to black-box sequential data classification explanations. In: FAT* \u201920: conference on fairness, accountability, and transparency, Barcelona, Spain","DOI":"10.1145\/3351095.3372855"},{"key":"933_CR121","doi-asserted-by":"crossref","unstructured":"Panigutti C, Beretta A, Giannotti F, et\u00a0al (2022) Understanding the impact of explanations on advice-taking: a user study for ai-based clinical decision support systems. 
In: CHI \u201922: CHI conference on human factors in computing systems, New Orleans, LA, USA","DOI":"10.1145\/3491102.3502104"},{"key":"933_CR122","doi-asserted-by":"crossref","unstructured":"Pasquale F (2015) The black box society: the secret algorithms that control money and information. Harvard University Press","DOI":"10.4159\/harvard.9780674736061"},{"key":"933_CR123","doi-asserted-by":"crossref","unstructured":"Pawelczyk M, Broelemann K, Kasneci G (2020) Learning model-agnostic counterfactual explanations for tabular data. In: WWW \u201920: the web conference 2020, Taipei, Taiwan","DOI":"10.1145\/3366423.3380087"},{"key":"933_CR124","unstructured":"Peltola T (2018) Local interpretable model-agnostic explanations of Bayesian predictive models via Kullback\u2013Leibler projections. arXiv preprint arXiv:1810.02678"},{"key":"933_CR125","unstructured":"Petsiuk V, Das A, Saenko K (2018) RISE: randomized input sampling for explanation of black-box models. In: British machine vision conference 2018, BMVC 2018, Newcastle, UK"},{"key":"933_CR126","doi-asserted-by":"crossref","unstructured":"Pezeshkpour P, Tian Y, Singh S (2019) Investigating robustness and interpretability of link prediction via adversarial modifications. In: 1st Conference on automated knowledge base construction, AKBC 2019, Amherst, MA, USA","DOI":"10.18653\/v1\/N19-1337"},{"key":"933_CR127","unstructured":"Plumb G, Molitor D, Talwalkar A (2018) Model agnostic supervised local explanations. In: Advances in neural information processing systems 31: annual conference on neural information processing systems 2018, NeurIPS 2018, Montr\u00e9al, Canada"},{"key":"933_CR128","doi-asserted-by":"crossref","unstructured":"Poyiadzi R, Sokol K, Santos-Rodr\u00edguez R, et\u00a0al (2020) FACE: feasible and actionable counterfactual explanations. 
In: AIES \u201920: AAAI\/ACM conference on AI, ethics, and society, New York, NY, USA","DOI":"10.1145\/3375627.3375850"},{"key":"933_CR129","unstructured":"Prado-Romero MA, Prenkaj B, Stilo G, et\u00a0al (2022) A survey on graph counterfactual explanations: definitions, methods, evaluation. arXiv preprint arXiv:2210.12089"},{"key":"933_CR130","unstructured":"Puri I, Dhurandhar A, Pedapati T, et\u00a0al (2021) Cofrnets: interpretable neural architecture inspired by continued fractions. In: Advances in neural information processing systems 34: annual conference on neural information processing systems 2021, NeurIPS 2021, virtual"},{"key":"933_CR131","doi-asserted-by":"crossref","unstructured":"Rajani NF, McCann B, Xiong C, et\u00a0al (2019) Explain yourself! Leveraging language models for commonsense reasoning. In: Proceedings of the 57th conference of the association for computational linguistics, ACL 2019, Florence, Italy, vol 1: long papers","DOI":"10.18653\/v1\/P19-1487"},{"key":"933_CR132","unstructured":"Renard X, Woloszko N, Aigrain J, et\u00a0al (2019) Concept tree: high-level representation of variables for more interpretable surrogate decision trees. arXiv preprint arXiv:1906.01297"},{"key":"933_CR133","doi-asserted-by":"crossref","unstructured":"Ribeiro MT, Singh S, Guestrin C (2016) \u201cWhy should I trust you?\u201d: Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, San Francisco, CA, USA","DOI":"10.1145\/2939672.2939778"},{"key":"933_CR134","doi-asserted-by":"crossref","unstructured":"Ribeiro MT, Singh S, Guestrin C (2018) Anchors: High-precision model-agnostic explanations. 
In: Proceedings of the thirty-second AAAI conference on artificial intelligence, (AAAI-18), the 30th innovative applications of artificial intelligence (IAAI-18), and the 8th AAAI symposium on educational advances in artificial intelligence (EAAI-18), New Orleans, Louisiana, USA","DOI":"10.1609\/aaai.v32i1.11491"},{"key":"933_CR135","doi-asserted-by":"crossref","unstructured":"Robnik-\u0160ikonja M, Kononenko I (2008) Explaining classifications for individual instances. IEEE Trans Knowl Data Eng 20(5)","DOI":"10.1109\/TKDE.2007.190734"},{"key":"933_CR136","unstructured":"Rojat T, Puget R, Filliat D, et\u00a0al (2021) Explainable artificial intelligence (XAI) on timeseries data: a survey. arXiv preprint arXiv:2104.00950"},{"key":"933_CR137","doi-asserted-by":"crossref","unstructured":"Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell","DOI":"10.1038\/s42256-019-0048-x"},{"key":"933_CR138","doi-asserted-by":"crossref","unstructured":"Samek W, Montavon G, Vedaldi A, et\u00a0al (eds) (2019) Explainable AI: interpreting, explaining and visualizing deep learning, lecture notes in computer science, vol 11700. Springer","DOI":"10.1007\/978-3-030-28954-6"},{"key":"933_CR139","unstructured":"Schwab P, Karlen W (2019) Cxplain: causal explanations for model interpretation under uncertainty. In: Advances in neural information processing systems 32: annual conference on neural information processing systems 2019, NeurIPS 2019, Vancouver, BC, Canada"},{"key":"933_CR140","doi-asserted-by":"crossref","unstructured":"Schwarzenberg R, H\u00fcbner M, Harbecke D, et\u00a0al (2019) Layerwise relevance visualization in convolutional text graph classifiers. 
In: Proceedings of the thirteenth workshop on graph-based methods for natural language processing, TextGraphs@EMNLP 2019, Hong Kong","DOI":"10.18653\/v1\/D19-5308"},{"key":"933_CR141","doi-asserted-by":"crossref","unstructured":"Selvaraju RR, Cogswell M, Das A, et\u00a0al (2020) Grad-cam: Visual explanations from deep networks via gradient-based localization. Int J Comput Vis","DOI":"10.1007\/s11263-019-01228-7"},{"key":"933_CR142","doi-asserted-by":"crossref","unstructured":"Setzu M, Guidotti R, Monreale A, et\u00a0al (2019) Global explanations with local scoring. In: Machine learning and knowledge discovery in databases\u2014international workshops of ECML PKDD 2019, W\u00fcrzburg, Germany, proceedings, part I","DOI":"10.1007\/978-3-030-43823-4_14"},{"key":"933_CR143","doi-asserted-by":"crossref","unstructured":"Setzu M, Guidotti R, Monreale A, et\u00a0al (2021) Glocalx\u2014from local to global explanations of black box AI models. Artif Intell","DOI":"10.1016\/j.artint.2021.103457"},{"key":"933_CR144","doi-asserted-by":"crossref","unstructured":"Shankaranarayana SM, Runje D (2019) ALIME: autoencoder based approach for local interpretability. In: Intelligent data engineering and automated learning\u2014IDEAL 2019\u201420th international conference, Manchester, UK, proceedings, part I","DOI":"10.1007\/978-3-030-33607-3_49"},{"key":"933_CR145","doi-asserted-by":"crossref","unstructured":"Shen W, Wei Z, Huang S, et\u00a0al (2021) Interpretable compositional convolutional neural networks. In: Proceedings of the thirtieth international joint conference on artificial intelligence, IJCAI 2021, virtual event\/Montreal, Canada","DOI":"10.24963\/ijcai.2021\/409"},{"key":"933_CR146","unstructured":"Shi S, Zhang X, Fan W (2020) A modified perturbed sampling method for local interpretable model-agnostic explanation. 
arXiv preprint arXiv:2002.07434"},{"key":"933_CR147","unstructured":"Shrikumar A, Greenside P, Kundaje A (2017) Learning important features through propagating activation differences. In: Proceedings of the 34th international conference on machine learning, ICML 2017, Sydney, NSW"},{"key":"933_CR148","unstructured":"Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. In: 3rd International conference on learning representations, ICLR 2015, San Diego, CA, USA, Conference track proceedings"},{"key":"933_CR149","unstructured":"Smilkov D, Thorat N, Kim B, et\u00a0al (2017) Smoothgrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825"},{"key":"933_CR150","doi-asserted-by":"publisher","first-page":"333","DOI":"10.1016\/j.jbusres.2019.07.039","volume":"104","author":"H Snyder","year":"2019","unstructured":"Snyder H (2019) Literature review as a research methodology: an overview and guidelines. J Bus Res 104:333\u2013339. https:\/\/doi.org\/10.1016\/j.jbusres.2019.07.039","journal-title":"J Bus Res"},{"key":"933_CR151","doi-asserted-by":"crossref","unstructured":"Srivastava S, Labutov I, Mitchell TM (2017) Joint concept learning and semantic parsing from natural language explanations. In: Proceedings of the 2017 conference on empirical methods in natural language processing, EMNLP 2017, Copenhagen, Denmark","DOI":"10.18653\/v1\/D17-1161"},{"key":"933_CR152","doi-asserted-by":"crossref","unstructured":"Suissa-Peleg A, Haehn D, Knowles-Barley S, et\u00a0al (2016) Automatic neural reconstruction from petavoxel of electron microscopy data. Microsc Microanal","DOI":"10.1017\/S1431927616003536"},{"key":"933_CR153","unstructured":"Sundararajan M, Taly A, Yan Q (2017) Axiomatic attribution for deep networks. 
In: Proceedings of the 34th international conference on machine learning, ICML 2017, Sydney, NSW, Australia"},{"key":"933_CR154","doi-asserted-by":"crossref","unstructured":"Tan S, Soloviev M, Hooker G, et\u00a0al (2020) Tree space prototypes: another look at making tree ensembles interpretable. In: FODS \u201920: ACM-IMS foundations of data science conference, virtual event, USA","DOI":"10.1145\/3412815.3416893"},{"key":"933_CR155","doi-asserted-by":"crossref","unstructured":"Theissler A (2017) Detecting known and unknown faults in automotive systems using ensemble-based anomaly detection. Knowl Based Syst","DOI":"10.1016\/j.knosys.2017.02.023"},{"key":"933_CR156","doi-asserted-by":"crossref","unstructured":"Theissler A, Spinnato F, Schlegel U, et\u00a0al (2022) Explainable AI for time series classification: a review, taxonomy and research directions. IEEE Access","DOI":"10.1109\/ACCESS.2022.3207765"},{"key":"933_CR157","unstructured":"Tjoa E, Guan C (2019) A survey on explainable artificial intelligence (XAI): towards medical XAI. arXiv preprint arXiv:1907.07374"},{"key":"933_CR158","unstructured":"Vaswani A, Shazeer N, Parmar N, et\u00a0al (2017) Attention is all you need. In: Advances in neural information processing systems 30: annual conference on neural information processing systems 2017, Long Beach, CA, USA"},{"key":"933_CR159","unstructured":"Verma S, Dickerson JP, Hines K (2020) Counterfactual explanations for machine learning: a review. arXiv preprint arXiv:2010.10596"},{"issue":"2","key":"933_CR160","doi-asserted-by":"publisher","first-page":"315","DOI":"10.1007\/s10044-021-01055-y","volume":"25","author":"T Vermeire","year":"2022","unstructured":"Vermeire T, Brughmans D, Goethals S et al (2022) Explainable image classification with evidence counterfactual. 
Pattern Anal Appl 25(2):315\u2013335","journal-title":"Pattern Anal Appl"},{"key":"933_CR161","doi-asserted-by":"crossref","unstructured":"Wachter S, Mittelstadt BD, Russell C (2017) Counterfactual explanations without opening the black box: automated decisions and the GDPR. arXiv preprint arXiv:1711.00399","DOI":"10.2139\/ssrn.3063289"},{"key":"933_CR162","doi-asserted-by":"crossref","unstructured":"Wang H, Wang Z, Du M, et\u00a0al (2020) Score-cam: score-weighted visual explanations for convolutional neural networks. In: 2020 IEEE\/CVF conference on computer vision and pattern recognition, CVPR workshops 2020, Seattle, WA, USA","DOI":"10.1109\/CVPRW50498.2020.00020"},{"key":"933_CR163","doi-asserted-by":"crossref","unstructured":"Williams JJ, Kim J, Rafferty AN, et\u00a0al (2016) AXIS: generating explanations at scale with learnersourcing and machine learning. In: Proceedings of the third ACM conference on learning @ Scale, L@S 2016, Edinburgh, Scotland, UK","DOI":"10.1145\/2876034.2876042"},{"key":"933_CR164","unstructured":"Wu Z, Ong DC (2021) Context-guided BERT for targeted aspect-based sentiment analysis. In: Thirty-fifth AAAI conference on artificial intelligence, AAAI 2021, thirty-third conference on innovative applications of artificial intelligence, IAAI 2021, the eleventh symposium on educational advances in artificial intelligence, EAAI 2021, virtual event"},{"key":"933_CR165","doi-asserted-by":"crossref","unstructured":"Wu T, Ribeiro MT, Heer J, et\u00a0al (2021a) Polyjuice: generating counterfactuals for explaining, evaluating, and improving models. 
In: Proceedings of the 59th annual meeting of the association for computational linguistics and the 11th international joint conference on natural language processing, ACL\/IJCNLP 2021, (vol 1: long papers), virtual event","DOI":"10.18653\/v1\/2021.acl-long.523"},{"key":"933_CR166","doi-asserted-by":"crossref","unstructured":"Wu Z, Pan S, Chen F, et\u00a0al (2021b) A comprehensive survey on graph neural networks. IEEE Trans Neural Networks Learn Syst","DOI":"10.1109\/TNNLS.2020.2978386"},{"key":"933_CR167","unstructured":"Xu K, Ba J, Kiros R, et\u00a0al (2015) Show, attend and tell: neural image caption generation with visual attention. In: Proceedings of the 32nd international conference on machine learning, ICML 2015, Lille, France"},{"key":"933_CR168","unstructured":"Yang M, Kim B (2019) BIM: towards quantitative evaluation of interpretability methods with ground truth. arXiv preprint arXiv:1907.09701"},{"key":"933_CR169","doi-asserted-by":"crossref","unstructured":"Yang H, Rudin C, Seltzer MI (2017) Scalable Bayesian rule lists. In: Proceedings of the 34th international conference on machine learning, ICML 2017, Sydney, NSW, Australia","DOI":"10.32614\/CRAN.package.sbrl"},{"key":"933_CR170","unstructured":"Yeh C, Kim B, Arik S\u00d6, et\u00a0al (2020) On completeness-aware concept-based explanations in deep neural networks. In: Advances in neural information processing systems 33: annual conference on neural information processing systems 2020, NeurIPS 2020, virtual"},{"key":"933_CR171","doi-asserted-by":"crossref","unstructured":"Yuan H, Tang J, Hu X, et\u00a0al (2020a) XGNN: towards model-level explanations of graph neural networks. In: KDD \u201920: the 26th ACM SIGKDD conference on knowledge discovery and data mining, virtual event, CA, USA, August (2020)","DOI":"10.1145\/3394486.3403085"},{"key":"933_CR172","unstructured":"Yuan H, Yu H, Gui S, et\u00a0al (2020b) Explainability in graph neural networks: a taxonomic survey. 
arXiv preprint arXiv:2012.15445"},{"key":"933_CR173","unstructured":"Yuan H, Yu H, Gui S, et\u00a0al (2020c) Explainability in graph neural networks: a taxonomic survey. arXiv preprint arXiv:2012.15445"},{"key":"933_CR174","unstructured":"Zafar MR, Khan NM (2019) DLIME: a deterministic local interpretable model-agnostic explanations approach for computer-aided diagnosis systems. arXiv preprint arXiv:1906.10263"},{"key":"933_CR175","doi-asserted-by":"crossref","unstructured":"Zhang Y, Chen X (2020) Explainable recommendation: a survey and new perspectives. Found Trends Inf Retr","DOI":"10.1561\/9781680836592"},{"key":"933_CR176","doi-asserted-by":"crossref","unstructured":"Zhang H, Torres F, Sicre R, et\u00a0al (2023) Opti-cam: optimizing saliency maps for interpretability. CoRR arXiv:2301.07002","DOI":"10.2139\/ssrn.4476687"},{"key":"933_CR177","unstructured":"Zhou Y, Hooker G (2016) Interpreting models via single tree approximation. arXiv preprint arXiv:1610.09036"}],"container-title":["Data Mining and Knowledge 
Discovery"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10618-023-00933-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10618-023-00933-9\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10618-023-00933-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,10,21]],"date-time":"2024-10-21T19:27:14Z","timestamp":1729538834000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10618-023-00933-9"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,6,3]]},"references-count":177,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2023,9]]}},"alternative-id":["933"],"URL":"https:\/\/doi.org\/10.1007\/s10618-023-00933-9","relation":{},"ISSN":["1384-5810","1573-756X"],"issn-type":[{"value":"1384-5810","type":"print"},{"value":"1573-756X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,6,3]]},"assertion":[{"value":"24 November 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"10 March 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 June 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"Not 
applicable.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval"}},{"value":"Not applicable.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent to participate"}},{"value":"The authors declare that they all provide consent for publication.","order":5,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}}]}}