{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,11]],"date-time":"2026-04-11T13:01:36Z","timestamp":1775912496305,"version":"3.50.1"},"reference-count":97,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2022,9,6]],"date-time":"2022-09-06T00:00:00Z","timestamp":1662422400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,9,6]],"date-time":"2022-09-06T00:00:00Z","timestamp":1662422400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100010663","name":"H2020 European Research Council","doi-asserted-by":"publisher","award":["951911"],"award-info":[{"award-number":["951911"]}],"id":[{"id":"10.13039\/100010663","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003475","name":"Hasler Stiftung","doi-asserted-by":"publisher","award":["21042"],"award-info":[{"award-number":["21042"]}],"id":[{"id":"10.13039\/501100003475","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003475","name":"Hasler Stiftung","doi-asserted-by":"publisher","award":["21064"],"award-info":[{"award-number":["21064"]}],"id":[{"id":"10.13039\/501100003475","id-type":"DOI","asserted-by":"publisher"}]},{"name":"University of Applied Sciences and Arts Western Switzerland"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Artif Intell Rev"],"published-print":{"date-parts":[[2023,4]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Since its emergence in the 1960s, Artificial Intelligence (AI) has grown to conquer many technology products and their fields of application. Machine learning, as a major part of the current AI solutions, can learn from the data and through experience to reach high performance on various tasks. This growing success of AI algorithms has led to a need for interpretability to understand opaque models such as deep neural networks. Various requirements have been raised from different domains, together with numerous tools to debug, justify outcomes, and establish the safety, fairness and reliability of the models. This variety of tasks has led to inconsistencies in the terminology with, for instance, terms such as <jats:italic>interpretable<\/jats:italic>, <jats:italic>explainable<\/jats:italic> and <jats:italic>transparent<\/jats:italic> being often used interchangeably in methodology papers. These words, however, convey different meanings and are \u201cweighted\" differently across domains, for example in the technical and social sciences. In this paper, we propose an overarching terminology of interpretability of AI systems that can be referred to by the technical developers as much as by the social sciences community to pursue clarity and efficiency in the definition of regulations for ethical and reliable AI development. 
We show how our taxonomy and definition of interpretable AI differ from the ones in previous research and how they apply with high versatility to several domains and use cases, proposing a\u2014highly needed\u2014standard for the communication among interdisciplinary areas of AI.<\/jats:p>","DOI":"10.1007\/s10462-022-10256-8","type":"journal-article","created":{"date-parts":[[2022,9,6]],"date-time":"2022-09-06T10:04:11Z","timestamp":1662458651000},"page":"3473-3504","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":78,"title":["A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences"],"prefix":"10.1007","volume":"56","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-3456-945X","authenticated-orcid":false,"given":"Mara","family":"Graziani","sequence":"first","affiliation":[]},{"given":"Lidia","family":"Dutkiewicz","sequence":"additional","affiliation":[]},{"given":"Davide","family":"Calvaresi","sequence":"additional","affiliation":[]},{"given":"Jos\u00e9 Pereira","family":"Amorim","sequence":"additional","affiliation":[]},{"given":"Katerina","family":"Yordanova","sequence":"additional","affiliation":[]},{"given":"Mor","family":"Vered","sequence":"additional","affiliation":[]},{"given":"Rahul","family":"Nair","sequence":"additional","affiliation":[]},{"given":"Pedro Henriques","family":"Abreu","sequence":"additional","affiliation":[]},{"given":"Tobias","family":"Blanke","sequence":"additional","affiliation":[]},{"given":"Valeria","family":"Pulignano","sequence":"additional","affiliation":[]},{"given":"John O.","family":"Prior","sequence":"additional","affiliation":[]},{"given":"Lode","family":"Lauwaert","sequence":"additional","affiliation":[]},{"given":"Wessel","family":"Reijers","sequence":"additional","affiliation":[]},{"given":"Adrien","family":"Depeursinge","sequence":"additional","affiliation":[]},{"given":"Vincent","family":"Andrearczyk","sequence":"additional","affiliation":[]},{"given":"Henning","family":"M\u00fcller","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,9,6]]},"reference":[{"key":"10256_CR1","unstructured":"A\u00efvodji U, Arai H, Fortineau O, Gambs S, Hara S, Tapp A (2019) Fairwashing: the risk of rationalization. In: International conference on machine learning. PMLR, pp 161\u2013170"},{"key":"10256_CR2","doi-asserted-by":"publisher","first-page":"52138","DOI":"10.1109\/ACCESS.2018.2870052","volume":"6","author":"A Adadi","year":"2018","unstructured":"Adadi A, Berrada M (2018) Peeking inside the black-box: a survey on explainable artificial intelligence (xai). IEEE Access 6:52138\u201352160","journal-title":"IEEE Access"},{"key":"10256_CR3","unstructured":"Arya V, Bellamy RKE, Chen P-Y, Dhurandhar A, Hind M, Hoffman SC, Houde S, Liao QV, Luss R, Mojsilovi\u0107 A et\u00a0al (2019) One explanation does not fit all: A toolkit and taxonomy of ai explainability techniques. arXiv preprint arXiv:1909.03012"},{"issue":"6","key":"10256_CR4","doi-asserted-by":"publisher","first-page":"e15154","DOI":"10.2196\/15154","volume":"22","author":"O Asan","year":"2020","unstructured":"Asan O, Bayrak AE, Choudhury A (2020) Artificial intelligence and human trust in healthcare: focus on clinicians. 
J Med Internet Res 22(6):e15154","journal-title":"J Med Internet Res"},{"issue":"3","key":"10256_CR5","doi-asserted-by":"publisher","first-page":"973","DOI":"10.1177\/1461444816676645","volume":"20","author":"Mike Ananny","year":"2018","unstructured":"Ananny Mike, Crawford Kate (2018) Seeing without knowing: limitations of the transparency ideal and its application to algorithmic accountability. New Media Soc 20(3):973\u2013989","journal-title":"New Media Soc"},{"key":"10256_CR6","doi-asserted-by":"publisher","first-page":"82","DOI":"10.1016\/j.inffus.2019.12.012","volume":"58","author":"AB Arrieta","year":"2020","unstructured":"Arrieta AB, D\u00edaz-Rodr\u00edguez N, Del\u00a0Ser J, Bennetot A, Tabik S, Barbado A, Garc\u00eda S, Gil-L\u00f3pez S, Molina D, Benjamins R et al (2020) Explainable artificial intelligence (xai): concepts, taxonomies, opportunities and challenges toward responsible AI. Inform Fusion 58:82\u2013115","journal-title":"Inform Fusion"},{"key":"10256_CR7","unstructured":"Anjomshoae S, Najjar A, Calvaresi D, Fr\u00e4mling K (2019) Explainable agents and robots: results from a systematic literature review. In: 18th International Conference on autonomous agents and multiagent systems (AAMAS 2019), Montreal, May 13\u201317, 2019. International Foundation for Autonomous Agents and Multiagent Systems, 2019, pp 1078\u20131088"},{"key":"10256_CR8","unstructured":"Biran O, Cotton C (2017) Explanation and justification in machine learning: a survey. In: IJCAI-17 workshop on explainable AI (XAI) 8:8\u201313"},{"key":"10256_CR9","doi-asserted-by":"crossref","unstructured":"Banja JD, Hollstein RD, Bruno MA (2022) When artificial intelligence models surpass physician performance: medical malpractice liability in an era of advanced artificial intelligence. J Am Coll Radiol","DOI":"10.1016\/j.jacr.2021.11.014"},{"key":"10256_CR10","first-page":"97","volume":"14","author":"TR Besold","year":"2015","unstructured":"Besold TR, K\u00fchnberger K-U (2015) Towards integrated neural-symbolic systems for human-level AI. Two research programs helping to bridge the gaps. Biol Inspir Cognit Archit 14:97\u2013110","journal-title":"Biol Inspir Cognit Archit"},{"key":"10256_CR11","doi-asserted-by":"crossref","unstructured":"Bibal A, Lognoul M, de\u00a0Streel A, Fr\u00e9nay B (2020) Impact of legal requirements on explainability in machine learning. arXiv preprint arXiv:2007.05479","DOI":"10.1007\/s10506-020-09270-4"},{"key":"10256_CR12","first-page":"1","volume":"29","author":"A Bibal","year":"2020","unstructured":"Bibal A, Lognoul M, de Streel A, Fr\u00e9nay B (2020) Legal requirements on explainability in machine learning. Artif Intell Law 29:1\u201321","journal-title":"Artif Intell Law"},{"key":"10256_CR13","doi-asserted-by":"publisher","first-page":"27","DOI":"10.3389\/fmed.2020.00027","volume":"7","author":"G Briganti","year":"2020","unstructured":"Briganti G, Le Moine O (2020) Artificial intelligence in medicine: today and tomorrow. Front Med 7:27","journal-title":"Front Med"},{"issue":"9","key":"10256_CR14","doi-asserted-by":"publisher","first-page":"3753","DOI":"10.1007\/s10115-020-01473-0","volume":"62","author":"M Chakraborty","year":"2020","unstructured":"Chakraborty M, Biswas SK, Purkayastha B (2020) Rule extraction from neural network trained using deep belief network and back propagation. 
Knowl Inform Syst 62(9):3753\u20133781","journal-title":"Knowl Inform Syst"},{"key":"10256_CR15","doi-asserted-by":"crossref","unstructured":"Calvaresi D, Ciatto G, Najjar A, Aydogan R, Van\u00a0der Torre L, Omicini A, Schumacher M (2021) Expectation: personalized explainable artificial intelligence for decentralized agents with heterogeneous knowledge. In: international workshop on explainable and transparent AI and multi-agent systems, Springer","DOI":"10.1007\/978-3-030-82017-6_20"},{"key":"10256_CR16","unstructured":"Ciatto G, Calegari R, Omicini A, Calvaresi D (2019) Towards XMAS: explainability through multi-agent systems. In: Claudio S, Giancarlo F, Giovanni C, and Andrea O, (eds). Proceedings of the 1st workshop on artificial intelligence and internet of things co-located with the 18th international conference of the italian association for artificial intelligence (AI*IA 2019), Rende (CS), November 22, 2019, volume 2502 of CEUR Workshop Proceedings, pp 40\u201353. CEUR-WS.org,"},{"key":"10256_CR17","doi-asserted-by":"crossref","unstructured":"Clinciu M-A, Hastie H (2019) A survey of explainable ai terminology. In: Proceedings of the 1st workshop on interactive natural language technology for explainable artificial intelligence (NL4XAI 2019), pp 8\u201313","DOI":"10.18653\/v1\/W19-8403"},{"key":"10256_CR18","doi-asserted-by":"crossref","unstructured":"Caruana R, Lou Y, Gehrke J, Koch P, Sturm M, Elhadad N (2015) Intelligible models for healthcare: predicting pneumonia risk and hospital 30-day readmission. In: Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining, pp 1721\u20131730","DOI":"10.1145\/2783258.2788613"},{"key":"10256_CR19","doi-asserted-by":"crossref","unstructured":"Calvaresi D, Marinoni M, Sturm A, Schumacher M, Buttazzo G (2017) The challenge of real-time multi-agent systems for enabling IOT and CPS. In: Proceedings of the international conference on web intelligence, pp 356\u2013364","DOI":"10.1145\/3106426.3106518"},{"key":"10256_CR20","doi-asserted-by":"crossref","unstructured":"Coeckelbergh Mark (2020) AI ethics. MIT Press","DOI":"10.7551\/mitpress\/12549.001.0001"},{"key":"10256_CR21","unstructured":"Chromik M, Schuessler M (2020) A taxonomy for human subject evaluation of black-box explanations in xai. In: ExSS-ATEC@ IUI, p\u00a01"},{"key":"10256_CR22","doi-asserted-by":"crossref","unstructured":"Ciatto G, Schumacher MI, Omicini A, Calvaresi D (2020) Agent-based explanations in ai: towards an abstract framework. In: International workshop on explainable, transparent autonomous agents and multi-agent systems. Springer, pp 3\u201320","DOI":"10.1007\/978-3-030-51924-7_1"},{"key":"10256_CR23","doi-asserted-by":"publisher","first-page":"1433","DOI":"10.1126\/science.aba9647","volume":"3586498","author":"D Coyle","year":"2020","unstructured":"Coyle D, Weller A (2020) \u201cExplaining\u201d machine learning reveals policy challenges. Science 3586498:1433\u20131434","journal-title":"Science"},{"issue":"1","key":"10256_CR24","first-page":"7","volume":"1","author":"S Dick","year":"2019","unstructured":"Dick S (2019) Artificial intelligence. Harvard Data Sci Rev 1(1):7","journal-title":"Harvard Data Sci Rev"},{"key":"10256_CR25","unstructured":"De\u00a0Raedt L, Manhaeve R, Dumancic S, Demeester T, Kimmig A (2019) Neuro-symbolic= neural+ logical+ probabilistic. 
In NeSy\u201919@ IJCAI, the 14th International Workshop on Neural-Symbolic Learning and Reasoning"},{"key":"10256_CR26","unstructured":"Doshi-Velez F, Kim B (2017) Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608"},{"key":"10256_CR27","first-page":"18","volume":"16","author":"L Edwards","year":"2017","unstructured":"Edwards L, Veale M (2017) Slave to the algorithm: why a right to an explanation is probably not the remedy you are looking for. Duke L Tech Rev 16:18","journal-title":"Duke L Tech Rev"},{"issue":"4","key":"10256_CR28","doi-asserted-by":"publisher","first-page":"689","DOI":"10.1007\/s11023-018-9482-5","volume":"28","author":"L Floridi","year":"2018","unstructured":"Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C, Madelin R, Pagallo U, Rossi F et al (2018) Ai4people-an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach 28(4):689\u2013707","journal-title":"Minds Mach"},{"key":"10256_CR29","doi-asserted-by":"crossref","unstructured":"Franklin S, Graesser A (1996) Is it an agent, or just a program?: A taxonomy for autonomous agents. In: International workshop on agent theories, architectures, and languages. Springer, pp 21\u201335","DOI":"10.1007\/BFb0013570"},{"key":"10256_CR30","unstructured":"Frosst N, Hinton G (2017) Distilling a neural network into a soft decision tree. In: Proceedings of the first international workshop on comprehensibility and explanation in AI and ML 2017, Co-located with 16th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2017)"},{"key":"10256_CR31","doi-asserted-by":"publisher","first-page":"103865","DOI":"10.1016\/j.compbiomed.2020.103865","volume":"123","author":"M Graziani","year":"2020","unstructured":"Graziani M, Andrearczyk V, Marchand-Maillet S, M\u00fcller H (2020) Concept attribution: explaining CNN decisions to physicians. Comput Biol Med 123:103865","journal-title":"Comput Biol Med"},{"key":"10256_CR32","unstructured":"Goodman B, Flaxman S (2016) Eu regulations on algorithmic decision-making and a \u201dright to explanation\u201d. In ICML workshop on human interpretability in machine learning (WHI 2016), New York. arXiv. org\/abs\/1606.08813 v1"},{"issue":"5","key":"10256_CR33","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3236009","volume":"51","author":"R Guidotti","year":"2018","unstructured":"Guidotti R, Monreale A, Ruggieri S, Turini F, Giannotti F, Pedreschi D (2018) A survey of methods for explaining black box models. ACM Comput Surv (CSUR) 51(5):1\u201342","journal-title":"ACM Comput Surv (CSUR)"},{"key":"10256_CR34","unstructured":"Graziani M (2021) Interpretability of deep learning for medical image classification: improved understandability and generalization. PhD thesis, University of Geneva"},{"key":"10256_CR35","unstructured":"Goodfellow IJ, Shlens J, Szegedy C(2015) Explaining and Harnessing Adversarial Examples. In: Yoshua B and Yann L, (eds), 3rd International conference on learning representations, ICLR 2015, San Diego, May 7-9, 2015, Conference track proceedings, pp 1\u201311"},{"issue":"1","key":"10256_CR36","doi-asserted-by":"publisher","first-page":"65","DOI":"10.1037\/0033-2909.107.1.65","volume":"107","author":"DJ Hilton","year":"1990","unstructured":"Hilton DJ (1990) Conversational processes and causal explanation. 
Psychol Bull 107(1):65","journal-title":"Psychol Bull"},{"key":"10256_CR37","doi-asserted-by":"crossref","unstructured":"Hilton D (2017) Social attribution and explanation","DOI":"10.1093\/oxfordhb\/9780199399550.013.33"},{"key":"10256_CR38","doi-asserted-by":"crossref","unstructured":"Hamon R, Junklewitz H, Malgieri G, Hert PD, Beslay L, Sanchez I (2021) Impossible explanations? beyond explainable AI in the GDPR from a covid-19 use case scenario. In: Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pp 549\u2013559","DOI":"10.1145\/3442188.3445917"},{"key":"10256_CR39","unstructured":"Hinton G, Vinyals O, Dean J (2015) Distilling the Knowledge in a Neural Network. In nips deep learning and representation learning workshop, pp 1\u20139"},{"key":"10256_CR40","unstructured":"Kim B, Khanna R, Koyejo OO (2016) Examples are not enough, learn to criticize! criticism for interpretability. In: D.\u00a0Lee, M.\u00a0Sugiyama, U.\u00a0Luxburg, I.\u00a0Guyon, and R.\u00a0Garnett (eds). Advances in neural information processing systems, volume\u00a029. Curran Associates, Inc, pp 1\u20139"},{"key":"10256_CR41","unstructured":"Koh PW, Liang P (2017) Understanding black-box predictions via influence functions. In: Doina P and Yee\u00a0WT (eds). Proceedings of the 34th international conference on machine learning, volume\u00a070 of Proceedings of Machine Learning Research. PMLR, 06\u201311 Aug, pp 1885\u20131894"},{"key":"10256_CR42","doi-asserted-by":"crossref","unstructured":"Kaur H, Nori H, Jenkins S, Caruana R, Wallach H, Wortman\u00a0VJ (2020) Interpreting interpretability: understanding data scientists\u2019 use of interpretability tools for machine learning. In: Proceedings of the 2020 CHI conference on human factors in computing systems, pp 1\u201314","DOI":"10.1145\/3313831.3376219"},{"key":"10256_CR43","unstructured":"Kim B, Wattenberg M, Gilmer J, Cai C, Wexler J, Viegas F, Sayres R (2018) Interpretability beyond feature attribution: quantitative testing with concept activation vectors (TCAV). In: Jennifer D and Andreas K (eds). Proceedings of the 35th international conference on machine learning, volume\u00a080 of Proceedings of Machine Learning Research. PMLR, 10\u201315 Jul, pp 2668\u20132677"},{"key":"10256_CR44","doi-asserted-by":"crossref","unstructured":"Lakkaraju H, Bach SH, Leskovec J (2016) Interpretable decision sets: a joint framework for description and prediction. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp 1675\u20131684","DOI":"10.1145\/2939672.2939874"},{"key":"10256_CR45","first-page":"07","volume":"10","author":"S Lapuschkin","year":"2015","unstructured":"Lapuschkin S, Binder A, Montavon G, Klauschen F, M\u00fcller K-R, Samek W (2015) On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 10:07","journal-title":"PLoS ONE"},{"issue":"3","key":"10256_CR46","doi-asserted-by":"publisher","first-page":"31","DOI":"10.1145\/3236386.3241340","volume":"16","author":"ZC Lipton","year":"2018","unstructured":"Lipton ZC (2018) The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue 16(3):31\u201357","journal-title":"Queue"},{"key":"10256_CR47","unstructured":"Lundberg SM, Lee SI (2017) A unified approach to interpreting model predictions. In: Proceedings of the 31st international conference on neural information processing systems, NIPS\u201917, Red Hook. 
Curran Associates Inc, pp 4768\u20134777"},{"issue":"10","key":"10256_CR48","doi-asserted-by":"publisher","first-page":"464","DOI":"10.1016\/j.tics.2006.08.004","volume":"10","author":"T Lombrozo","year":"2006","unstructured":"Lombrozo T (2006) The structure and function of explanations. Trends Cognit Sci 10(10):464\u2013470","journal-title":"Trends Cognit Sci"},{"key":"10256_CR49","unstructured":"Miller T, Howe P, Sonenberg L (2017) Explainable AI: beware of inmates running the asylum or: How i learnt to stop worrying and love the social and behavioural sciences. arXiv preprint arXiv:1712.00547"},{"key":"10256_CR50","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.artint.2018.07.007","volume":"267","author":"T Miller","year":"2019","unstructured":"Miller T (2019) Explanation in artificial intelligence: insights from the social sciences. Artif Intell 267:1\u201338","journal-title":"Artif Intell"},{"key":"10256_CR51","doi-asserted-by":"publisher","first-page":"211","DOI":"10.1016\/j.patcog.2016.11.008","volume":"65","author":"G Montavon","year":"2017","unstructured":"Montavon G, Lapuschkin S, Binder A, Samek W, M\u00fcller KR (2017) Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognit 65:211\u2013222","journal-title":"Pattern Recognit"},{"key":"10256_CR52","unstructured":"Molnar C (2019) Interpretable machine learning: a guide for making black box models explainable. Leanpub, https:\/\/christophm.github.io\/interpretable-ml-book(visited 15 May 2021)"},{"key":"10256_CR53","doi-asserted-by":"crossref","unstructured":"Mittelstadt B, Russell C, Wachter S (2019) Explaining explanations in AI. In: Proceedings of the conference on fairness, accountability, and transparency, pp 279\u2013288","DOI":"10.1145\/3287560.3287574"},{"key":"10256_CR54","doi-asserted-by":"crossref","unstructured":"Murdoch WJ, Singh C, Kumbier K, Abbasi-Asl R, Yu B (2019) Interpretable machine learning: definitions, methods, and applications. arXiv preprint arXiv:1901.04592","DOI":"10.1073\/pnas.1900654116"},{"key":"10256_CR55","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.dsp.2017.10.011","volume":"73","author":"G Montavon","year":"2018","unstructured":"Montavon G, Samek W, M\u00fcller KR (2018) Methods for interpreting and understanding deep neural networks. Digital Signal Process 73:1\u201315","journal-title":"Digital Signal Process"},{"issue":"4","key":"10256_CR56","doi-asserted-by":"publisher","first-page":"1074","DOI":"10.1016\/j.ijrobp.2018.08.032","volume":"102","author":"O Morin","year":"2018","unstructured":"Morin O, Valli\u00e8res M, Jochems A, Woodruff HC, Valdes G, Braunstein SE, Wildberger JE, Villanueva-Meyer JE, Kearney V, Solberg TD, Lambin P (2018) A deep look into the future of quantitative imaging in oncology: a statement of working principles and proposal for change. Int J Radiat Oncol Biol Phys 102(4):1074\u20131082","journal-title":"Int J Radiat Oncol Biol Phys"},{"key":"10256_CR57","unstructured":"Nguyen A, Dosovitskiy A, Yosinski J, Brox T, Clune J (2016) Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. 
In: Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS\u201916, Curran Associates Inc, Red Hook, pp 3395\u20133403"},{"issue":"4","key":"10256_CR58","doi-asserted-by":"publisher","first-page":"32","DOI":"10.1162\/DAED_a_00113","volume":"140","author":"H Nissenbaum","year":"2011","unstructured":"Nissenbaum H (2011) A contextual approach to privacy online. Daedalus 140(4):32\u201348","journal-title":"Daedalus"},{"key":"10256_CR59","unstructured":"Nguyen A, Mart\u00ednez MR (2019) Mononet: towards interpretable models by learning monotonic features. Human-Centric Machine Learning workshop, NeurIPS"},{"key":"10256_CR60","doi-asserted-by":"publisher","DOI":"10.1007\/978-4-431-55040-2","volume-title":"Conversational informatics","author":"T Nishida","year":"2014","unstructured":"Nishida T, Atsushi N, Yoshimasa O, Yasser M (2014) Conversational informatics. Springer, New York"},{"key":"10256_CR61","unstructured":"Omicini A (2020) Not just for humans: explanation for agent-to-agent communication. In Giuseppe V, Matteo P, and Andrea Or, (eds). Proceedings of the AIxIA 2020 discussion papers workshop co-located with the the 19th international conference of the Italian Association for Artificial Intelligence (AIxIA2020), Anywhere, November 27th, 2020, volume 2776 of CEUR Workshop Proceedings. CEUR-WS.org, pp 1\u201311"},{"key":"10256_CR62","doi-asserted-by":"crossref","unstructured":"Olah C, Mordvintsev A, Schubert L (2017) Feature visualization. Distill, https:\/\/distill.pub\/2017\/feature-visualization","DOI":"10.23915\/distill.00007"},{"key":"10256_CR63","doi-asserted-by":"crossref","unstructured":"Amorim JP, Abreu PH, Fern\u00e1ndez A, Reyes M, Santos J, Abreu MH (2021) Interpreting deep machine learning models: an easy guide for oncologists. IEEE Rev Biomed Eng, pp. 1\u201316","DOI":"10.1109\/RBME.2021.3131358"},{"issue":"1","key":"10256_CR64","doi-asserted-by":"publisher","first-page":"6","DOI":"10.1148\/radiol.2020200038","volume":"297","author":"OS Pianykh","year":"2020","unstructured":"Pianykh OS, Langs G, Dewey M, Enzmann DR, Herold CJ, Schoenberg SO, Brink JA (2020) Continuous learning AI in radiology: implementation principles and early applications. Radiology 297(1):6\u201314","journal-title":"Radiology"},{"key":"10256_CR65","doi-asserted-by":"crossref","unstructured":"Palacio S, Lucieri A, Munir M, Hees J, Ahmed S, Dengel A (2021) Xai handbook: towards a unified framework for explainable AI. arXiv preprint arXiv:2105.06677","DOI":"10.1109\/ICCVW54120.2021.00420"},{"issue":"3","key":"10256_CR66","first-page":"e190043","volume":"2","author":"M Reyes","year":"2020","unstructured":"Reyes M, Meier R, Pereira S, Silva CA, Dahlweid FM, von Tengg-Kobligk H, Summers RM, Wiest R (2020) On the interpretability of artificial intelligence in radiology: challenges and opportunities. Radiology 2(3):e190043","journal-title":"Radiology"},{"key":"10256_CR67","unstructured":"Russell S, Norvig P(2002) Artificial intelligence: a modern approach"},{"issue":"4","key":"10256_CR68","doi-asserted-by":"publisher","first-page":"495","DOI":"10.1007\/s11023-019-09509-3","volume":"29","author":"S Robbins","year":"2019","unstructured":"Robbins S (2019) A misdirected principle with a catch: explicability for AI. Minds Mach 29(4):495\u2013514","journal-title":"Minds Mach"},{"key":"10256_CR69","unstructured":"Riveret R, Pitt JV, Korkinof D, Draief M (2015) Neuro-symbolic agents: Boltzmann machines and probabilistic abstract argumentation with sub-arguments. 
In AAMAS, pp 1481\u20131489"},{"issue":"6","key":"10256_CR70","doi-asserted-by":"publisher","first-page":"673","DOI":"10.1007\/s10458-019-09408-y","volume":"33","author":"A Rosenfeld","year":"2019","unstructured":"Rosenfeld A, Richardson A (2019) Explainability in human-agent systems. Autonom Agents Multi-Agent Syst 33(6):673\u2013705","journal-title":"Autonom Agents Multi-Agent Syst"},{"key":"10256_CR71","doi-asserted-by":"crossref","unstructured":"Ribeiro MT, Singh S, Guestrin C (2016) \"why should i trust you?\": explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, KDD \u201916, New York. Association for Computing Machinery, pp 1135\u20131144","DOI":"10.1145\/2939672.2939778"},{"issue":"5","key":"10256_CR72","doi-asserted-by":"publisher","first-page":"206","DOI":"10.1038\/s42256-019-0048-x","volume":"1","author":"Cynthia Rudin","year":"2019","unstructured":"Rudin Cynthia (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1(5):206\u2013215","journal-title":"Nat Mach Intell"},{"key":"10256_CR73","doi-asserted-by":"crossref","unstructured":"Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D (2017) Grad-CAM: visual Explanations from deep networks via gradient-based localization. In: 2017 IEEE international conference on computer vision (ICCV), vol 128, pp 618\u2013626","DOI":"10.1109\/ICCV.2017.74"},{"key":"10256_CR74","doi-asserted-by":"publisher","DOI":"10.1002\/9781118884614","volume-title":"Multi-agent machine learning: a reinforcement approach","author":"HM Schwartz","year":"2014","unstructured":"Schwartz HM (2014) Multi-agent machine learning: a reinforcement approach. Wiley, New York"},{"key":"10256_CR75","unstructured":"Simpson J (2009) Oxford English dictionary"},{"key":"10256_CR76","unstructured":"Selbst A, Powles J (2018) \u201dmeaningful information\u201d and the right to explanation. In: Conference on fairness, accountability and transparency. PMLR, pp 48\u201348"},{"key":"10256_CR77","doi-asserted-by":"crossref","unstructured":"Stammer W, Schramowski P, Kersting K (2021) Right for the right concept: revising neuro-symbolic concepts by interacting with their explanations. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 3619\u20133629","DOI":"10.1109\/CVPR46437.2021.00362"},{"key":"10256_CR78","doi-asserted-by":"crossref","unstructured":"Searle JR, Searle PGW, Searle JR et al (1969) Speech acts: an essay in the philosophy of language, vol 626. Cambridge University Press","DOI":"10.1017\/CBO9781139173438"},{"key":"10256_CR79","unstructured":"Sundararajan M, Taly A, Yan Q (2017) Axiomatic attribution for deep networks. In: International Conference on Machine Learning. PMLR, pp 3319\u20133328"},{"key":"10256_CR80","unstructured":"Simonyan K, Vedaldi A, Zisserman A (2014) Deep inside convolutional networks: Visualising image classification models and saliency maps. In Yoshua B and Yann L (eds.), 2nd international conference on learning representations, ICLR 2014, Banff, April 14-16, 2014, Workshop Track Proceedings"},{"key":"10256_CR81","doi-asserted-by":"crossref","unstructured":"Sarker MdK, Zhou L, Eberhart A, Hitzler P (2021) Neuro-symbolic artificial intelligence current trends. 
arXiv preprint arXiv:2105.05330","DOI":"10.3233\/AIC-210084"},{"key":"10256_CR82","unstructured":"Tomsett R, Braines D, Harborne D, Preece A, Chakraborty S(2018) Interpretable to whom? A role-based model for analyzing interpretable machine learning systems. ICML Workshop on Human Interpretability in Machine Learning"},{"key":"10256_CR83","doi-asserted-by":"crossref","unstructured":"Tjoa E, Guan C (2020) A survey on explainable artificial intelligence (XAI): towards medical XAI. IEEE transactions on neural networks and learning systems","DOI":"10.1109\/TNNLS.2020.3027314"},{"key":"10256_CR84","unstructured":"Tonekaboni S, Joshi S, McCradden MD, Goldenberg A (2019) What clinicians want: contextualizing explainable machine learning for clinical end use. In: Machine learning for healthcare conference. PMLR, pp 359\u2013380"},{"key":"10256_CR85","volume-title":"Undecidable theories","author":"A Tarski","year":"1953","unstructured":"Tarski A, Mostowski A, Robinson RM (1953) Undecidable theories, vol 13. Elsevier, Amsterdam"},{"issue":"3","key":"10256_CR86","doi-asserted-by":"publisher","first-page":"264","DOI":"10.1109\/THMS.2020.2988859","volume":"50","author":"M Vered","year":"2020","unstructured":"Vered M, Howe P, Miller T, Sonenberg L, Velloso E (2020) Demand-driven transparency for monitoring intelligent agents. IEEE Trans Hum-Mach Syst 50(3):264\u2013275","journal-title":"IEEE Trans Hum-Mach Syst"},{"key":"10256_CR87","unstructured":"Vilone G, Longo L (2020) Explainable artificial intelligence: a systematic review. arXiv preprint arXiv:2006.00093"},{"key":"10256_CR88","doi-asserted-by":"crossref","unstructured":"Verma H, Schaer R, Reichenbach J, Jreige M, Prior JO, Ev\u00e9quoz F, Depeursinge A. (2021) On improving physicians\u2019 trust in AI: Qualitative inquiry with imaging experts in the oncological domain. BMC Medical Imaging, in review","DOI":"10.21203\/rs.3.rs-496758\/v1"},{"key":"10256_CR89","doi-asserted-by":"crossref","unstructured":"Ward J (2019) The student\u2019s guide to cognitive neuroscience. Routledge","DOI":"10.4324\/9781351035187"},{"key":"10256_CR90","doi-asserted-by":"crossref","unstructured":"Weller A (2019) Transparency: motivations and challenges. In: Explainable AI: interpreting, explaining and visualizing deep learning, Springer, pp 23\u201340","DOI":"10.1007\/978-3-030-28954-6_2"},{"key":"10256_CR91","doi-asserted-by":"crossref","unstructured":"Whitworth B (2006) Social-technical systems. In: Encyclopedia of human computer interaction, IGI Global, pp 533\u2013541","DOI":"10.4018\/978-1-59140-562-7.ch079"},{"issue":"1","key":"10256_CR92","doi-asserted-by":"publisher","first-page":"4","DOI":"10.1148\/radiol.2020192224","volume":"295","author":"MJ Willemink","year":"2020","unstructured":"Willemink MJ, Koszek WA, Hardell C, Wu J, Fleischmann D, Harvey H, Folio LR, Summers RM, Rubin DL, Lungren MP (2020) Preparing medical imaging data for machine learning. Radiology 295(1):4\u201315","journal-title":"Radiology"},{"issue":"2","key":"10256_CR93","doi-asserted-by":"publisher","first-page":"76","DOI":"10.1093\/idpl\/ipx005","volume":"7","author":"S Wachter","year":"2017","unstructured":"Wachter S, Mittelstadt B, Floridi L (2017) Why a right to explanation of automated decision-making does not exist in the general data protection regulation. 
Int Data Privacy Law 7(2):76\u201399","journal-title":"Int Data Privacy Law"},{"key":"10256_CR94","first-page":"841","volume":"31","author":"S Wachter","year":"2017","unstructured":"Wachter S, Mittelstadt B, Russell C (2017) Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv JL Tech 31:841","journal-title":"Harv JL Tech"},{"key":"10256_CR95","unstructured":"Yeh C-K, Hsieh C-Y, Suggala AS, Inouye DI, Ravikumar P (2019) On the (in)fidelity and sensitivity for explanations. arXiv preprint arXiv:1901.09392"},{"key":"10256_CR96","doi-asserted-by":"crossref","unstructured":"Zhou B, Khosla A, Lapedriza A, Oliva A, Torralba A (2016) Learning deep features for discriminative localization. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2921\u20132929","DOI":"10.1109\/CVPR.2016.319"},{"key":"10256_CR97","doi-asserted-by":"crossref","unstructured":"Zhang Y, Liao QV, Bellamy RKE (2020) Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making. In: Proceedings of the 2020 conference on fairness, accountability, and transparency, pp 295\u2013305","DOI":"10.1145\/3351095.3372852"}],"container-title":["Artificial Intelligence Review"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10462-022-10256-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10462-022-10256-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10462-022-10256-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,3,26]],"date-time":"2023-03-26T22:21:57Z","timestamp":1679869317000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10462-022-10256-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,9,6]]},"references-count":97,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2023,4]]}},"alternative-id":["10256"],"URL":"https:\/\/doi.org\/10.1007\/s10462-022-10256-8","relation":{},"ISSN":["0269-2821","1573-7462"],"issn-type":[{"value":"0269-2821","type":"print"},{"value":"1573-7462","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,9,6]]},"assertion":[{"value":"6 September 2022","order":1,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}