{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,16]],"date-time":"2026-02-16T10:24:40Z","timestamp":1771237480794,"version":"3.50.1"},"publisher-location":"Cham","reference-count":42,"publisher":"Springer International Publishing","isbn-type":[{"value":"9783031040825","type":"print"},{"value":"9783031040832","type":"electronic"}],"license":[{"start":{"date-parts":[[2022,1,1]],"date-time":"2022-01-01T00:00:00Z","timestamp":1640995200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,4,17]],"date-time":"2022-04-17T00:00:00Z","timestamp":1650153600000},"content-version":"vor","delay-in-days":106,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2022]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>The success of statistical machine learning from big data, especially of deep learning, has made artificial intelligence (AI) very popular. Unfortunately, especially with the most successful methods, the results are very difficult for human experts to comprehend. The application of AI in areas that impact human life (e.g., agriculture, climate, forestry, health, etc.) has therefore led to a demand for trust, which can be fostered if the methods can be interpreted and thus explained to humans. The research field of explainable artificial intelligence (XAI) provides the necessary foundations and methods. 
Historically, XAI has focused on the development of methods to explain the decisions and internal mechanisms of complex AI systems, with much initial research concentrating on explaining how convolutional neural networks produce image classification predictions by producing visualizations which highlight what input patterns are most influential in activating hidden units, or are most responsible for a model\u2019s decision. In this volume, we summarize research that outlines and takes next steps towards a broader vision for explainable AI in moving beyond explaining classifiers via such methods, to include explaining other kinds of models (e.g., unsupervised and reinforcement learning models) via a diverse array of XAI techniques (e.g., question-answering systems, structured explanations). In addition, we intend to move beyond simply providing model explanations to directly improving the transparency, efficiency, and generalization ability of models. We hope this volume presents not only exciting research developments in explainable AI but also a guide for which areas to focus on next within this fascinating and highly relevant research field as we enter the second decade of the deep learning revolution. 
This volume is an outcome of the ICML 2020 workshop on \u201cXXAI: Extending Explainable AI Beyond Deep Models and Classifiers.\u201d<\/jats:p>","DOI":"10.1007\/978-3-031-04083-2_1","type":"book-chapter","created":{"date-parts":[[2022,4,16]],"date-time":"2022-04-16T17:03:23Z","timestamp":1650128603000},"page":"3-10","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":29,"title":["xxAI - Beyond Explainable Artificial Intelligence"],"prefix":"10.1007","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-6786-5194","authenticated-orcid":false,"given":"Andreas","family":"Holzinger","sequence":"first","affiliation":[]},{"given":"Randy","family":"Goebel","sequence":"additional","affiliation":[]},{"given":"Ruth","family":"Fong","sequence":"additional","affiliation":[]},{"given":"Taesup","family":"Moon","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3861-7685","authenticated-orcid":false,"given":"Klaus-Robert","family":"M\u00fcller","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6283-3265","authenticated-orcid":false,"given":"Wojciech","family":"Samek","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,4,17]]},"reference":[{"key":"1_CR1","unstructured":"Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., Kim, B.: Sanity checks for saliency maps. In: NeurIPS (2018)"},{"key":"1_CR2","unstructured":"Adebayo, J., Muelly, M., Liccardi, I., Kim, B.: Debugging tests for model explanations. In: NeurIPS (2020)"},{"key":"1_CR3","doi-asserted-by":"crossref","unstructured":"Bach, S., Binder, A., Montavon, G., Klauschen, F., M\u00fcller, K.R., Samek, W.: On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. 
PLoS ONE 10(7), e0130140 (2015)","DOI":"10.1371\/journal.pone.0130140"},{"key":"1_CR4","doi-asserted-by":"crossref","unstructured":"Bau, D., Zhou, B., Khosla, A., Oliva, A., Torralba, A.: Network dissection: quantifying interpretability of deep visual representations. In: CVPR (2017)","DOI":"10.1109\/CVPR.2017.354"},{"issue":"7","key":"1_CR5","doi-asserted-by":"publisher","first-page":"58","DOI":"10.1145\/3448250","volume":"64","author":"Y Bengio","year":"2021","unstructured":"Bengio, Y., Lecun, Y., Hinton, G.: Deep learning for AI. Commun. ACM 64(7), 58\u201365 (2021). https:\/\/doi.org\/10.1145\/3448250","journal-title":"Commun. ACM"},{"key":"1_CR6","unstructured":"Brendel, W., Bethge, M.: Approximating CNNs with bag-of-local-features models works surprisingly well on ImageNet. In: ICLR (2019)"},{"key":"1_CR7","unstructured":"Chen, C., Li, O., Tao, D., Barnett, A., Rudin, C., Su, J.K.: This looks like that: deep learning for interpretable image recognition. In: NeurIPS (2019)"},{"key":"1_CR8","doi-asserted-by":"crossref","unstructured":"Fong, R., Patrick, M., Vedaldi, A.: Understanding deep networks via extremal perturbations and smooth masks. In: ICCV (2019)","DOI":"10.1109\/ICCV.2019.00304"},{"key":"1_CR9","doi-asserted-by":"crossref","unstructured":"Fong, R., Vedaldi, A.: Interpretable explanations of black boxes by meaningful perturbation. In: ICCV (2017)","DOI":"10.1109\/ICCV.2017.371"},{"key":"1_CR10","doi-asserted-by":"crossref","unstructured":"Fong, R., Vedaldi, A.: Net2Vec: quantifying and explaining how concepts are encoded by filters in deep neural networks. In: Proceedings of the CVPR (2018)","DOI":"10.1109\/CVPR.2018.00910"},{"key":"1_CR11","unstructured":"Hoffmann, A., Fanconi, C., Rade, R., Kohler, J.: This looks like that... does it? Shortcomings of latent space prototype interpretability in deep networks. 
In: ICML Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI (2021)"},{"key":"1_CR12","doi-asserted-by":"publisher","unstructured":"Holzinger, A., Carrington, A., M\u00fcller, H.: Measuring the quality of explanations: the System Causability Scale (SCS). KI - K\u00fcnstliche Intelligenz 34(2), 193\u2013198 (2020). https:\/\/doi.org\/10.1007\/s13218-020-00636-z","DOI":"10.1007\/s13218-020-00636-z"},{"issue":"3","key":"1_CR13","doi-asserted-by":"publisher","first-page":"263","DOI":"10.1016\/j.inffus.2021.10.007","volume":"79","author":"A Holzinger","year":"2022","unstructured":"Holzinger, A., et al.: Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence. Inf. Fusion 79(3), 263\u2013278 (2022). https:\/\/doi.org\/10.1016\/j.inffus.2021.10.007","journal-title":"Inf. Fusion"},{"issue":"7","key":"1_CR14","doi-asserted-by":"publisher","first-page":"28","DOI":"10.1016\/j.inffus.2021.01.008","volume":"71","author":"A Holzinger","year":"2021","unstructured":"Holzinger, A., Malle, B., Saranti, A., Pfeifer, B.: Towards multi-modal causability with graph neural networks enabling information fusion for explainable AI. Inf. Fusion 71(7), 28\u201337 (2021). https:\/\/doi.org\/10.1016\/j.inffus.2021.01.008","journal-title":"Inf. Fusion"},{"key":"1_CR15","doi-asserted-by":"crossref","unstructured":"Holzinger, A., Saranti, A., Molnar, C., Biececk, P., Samek, W.: Explainable AI methods - a brief overview. In: Holzinger, A., et al. (eds.) xxAI 2020. LNAI, vol. 13200, pp. 13\u201338. Springer, Cham (2022)","DOI":"10.1007\/978-3-031-04083-2_2"},{"key":"1_CR16","unstructured":"Hooker, S., Erhan, D., Kindermans, P.J., Kim, B.: A benchmark for interpretability methods in deep neural networks. 
In: NeurIPS (2019)"},{"issue":"10","key":"1_CR17","doi-asserted-by":"publisher","first-page":"2585","DOI":"10.1007\/s10115-021-01605-0","volume":"63","author":"X Hu","year":"2021","unstructured":"Hu, X., Chu, L., Pei, J., Liu, W., Bian, J.: Model complexity of deep learning: a survey. Knowl. Inf. Syst. 63(10), 2585\u20132619 (2021). https:\/\/doi.org\/10.1007\/s10115-021-01605-0","journal-title":"Knowl. Inf. Syst."},{"key":"1_CR18","unstructured":"Kim, B., et al.: Interpretability beyond feature attribution: quantitative testing with concept activation vectors (TCAV). In: Proceedings of the ICML (2018)"},{"key":"1_CR19","unstructured":"Koh, P.W., et al.: Concept bottleneck models. In: ICML (2020)"},{"key":"1_CR20","unstructured":"Lakkaraju, H., Arsov, N., Bastani, O.: Robust and stable black box explanations. In: Daum\u00e9, H., Singh, A. (eds.) International Conference on Machine Learning (ICML 2020), pp. 5628\u20135638. PMLR (2020)"},{"issue":"3","key":"1_CR21","doi-asserted-by":"publisher","first-page":"233","DOI":"10.1007\/s11263-016-0911-8","volume":"120","author":"A Mahendran","year":"2016","unstructured":"Mahendran, A., Vedaldi, A.: Visualizing deep convolutional neural networks using natural pre-images. Int. J. Comput. Vis. 120(3), 233\u2013255 (2016)","journal-title":"Int. J. Comput. Vis."},{"key":"1_CR22","series-title":"Lecture Notes in Computer Science","doi-asserted-by":"publisher","first-page":"351","DOI":"10.1007\/978-3-030-69538-5_22","volume-title":"Computer Vision \u2013 ACCV 2020","author":"D Marcos","year":"2021","unstructured":"Marcos, D., Fong, R., Lobry, S., Flamary, R., Courty, N., Tuia, D.: Contextual semantic interpretability. In: Ishikawa, H., Liu, C.-L., Pajdla, T., Shi, J. (eds.) ACCV 2020. LNCS, vol. 12625, pp. 351\u2013368. Springer, Cham (2021). 
https:\/\/doi.org\/10.1007\/978-3-030-69538-5_22"},{"key":"1_CR23","unstructured":"Margeloiu, A., Ashman, M., Bhatt, U., Chen, Y., Jamnik, M., Weller, A.: Do concept bottleneck models learn as intended? In: ICLR Workshop on Responsible AI (2021)"},{"issue":"7540","key":"1_CR24","doi-asserted-by":"publisher","first-page":"529","DOI":"10.1038\/nature14236","volume":"518","author":"V Mnih","year":"2015","unstructured":"Mnih, V., et al.: Human-level control through deep reinforcement learning. Nature 518(7540), 529\u2013533 (2015)","journal-title":"Nature"},{"issue":"7","key":"1_CR25","doi-asserted-by":"publisher","first-page":"119","DOI":"10.1109\/MC.2021.3074263","volume":"54","author":"H Mueller","year":"2021","unstructured":"Mueller, H., Mayrhofer, M.T., Veen, E.B.V., Holzinger, A.: The ten commandments of ethical medical AI. IEEE Comput. 54(7), 119\u2013123 (2021). https:\/\/doi.org\/10.1109\/MC.2021.3074263","journal-title":"IEEE Comput."},{"key":"1_CR26","doi-asserted-by":"crossref","unstructured":"Nauta, M., van Bree, R., Seifert, C.: Neural prototype trees for interpretable fine-grained image recognition. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 14933\u201314943 (2021)","DOI":"10.1109\/CVPR46437.2021.01469"},{"key":"1_CR27","doi-asserted-by":"crossref","unstructured":"Olah, C., Mordvintsev, A., Schubert, L.: Feature visualization. Distill 2(11), e7 (2017)","DOI":"10.23915\/distill.00007"},{"key":"1_CR28","unstructured":"Petsiuk, V., Das, A., Saenko, K.: Rise: randomized input sampling for explanation of black-box models. In: Proceedings of the BMVC (2018)"},{"key":"1_CR29","doi-asserted-by":"publisher","unstructured":"Pfeifer, B., Secic, A., Saranti, A., Holzinger, A.: GNN-subnet: disease subnetwork detection with explainable graph neural networks. bioRxiv, pp. 1\u20138 (2022). 
https:\/\/doi.org\/10.1101\/2022.01.12.475995","DOI":"10.1101\/2022.01.12.475995"},{"key":"1_CR30","doi-asserted-by":"crossref","unstructured":"Poppi, S., Cornia, M., Baraldi, L., Cucchiara, R.: Revisiting the evaluation of class activation mapping for explainability: a novel metric and experimental analysis. In: CVPR Workshop on Responsible Computer Vision (2021)","DOI":"10.1109\/CVPRW53098.2021.00260"},{"issue":"3","key":"1_CR31","doi-asserted-by":"publisher","first-page":"247","DOI":"10.1109\/JPROC.2021.3060483","volume":"109","author":"W Samek","year":"2021","unstructured":"Samek, W., Montavon, G., Lapuschkin, S., Anders, C.J., M\u00fcller, K.R.: Explaining deep neural networks and beyond: a review of methods and applications. Proc. IEEE 109(3), 247\u2013278 (2021)","journal-title":"Proc. IEEE"},{"key":"1_CR32","series-title":"Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence)","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-28954-6","volume-title":"Explainable AI: Interpreting, Explaining and Visualizing Deep Learning","year":"2019","unstructured":"Samek, W., Montavon, G., Vedaldi, A., Hansen, L.K., M\u00fcller, K.-R. (eds.): Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. LNCS (LNAI), vol. 11700. Springer, Cham (2019). https:\/\/doi.org\/10.1007\/978-3-030-28954-6"},{"key":"1_CR33","doi-asserted-by":"crossref","unstructured":"Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: visual explanations from deep networks via gradient-based localization. In: ICCV (2017)","DOI":"10.1109\/ICCV.2017.74"},{"key":"1_CR34","unstructured":"Shitole, V., Li, F., Kahng, M., Tadepalli, P., Fern, A.: One explanation is not enough: structured attention graphs for image classification. In: NeurIPS (2021)"},{"key":"1_CR35","unstructured":"Simonyan, K., Vedaldi, A., Zisserman, A.: Deep inside convolutional networks: visualising image classification models and saliency maps. 
In: ICLR Workshop (2014)"},{"issue":"11","key":"1_CR36","doi-asserted-by":"publisher","first-page":"34","DOI":"10.1145\/3458652","volume":"64","author":"K Stoeger","year":"2021","unstructured":"Stoeger, K., Schneeberger, D., Holzinger, A.: Medical artificial intelligence: the European legal perspective. Commun. ACM 64(11), 34\u201336 (2021). https:\/\/doi.org\/10.1145\/3458652","journal-title":"Commun. ACM"},{"key":"1_CR37","unstructured":"Yang, M., Kim, B.: Benchmarking attribution methods with relative feature importance (2019)"},{"key":"1_CR38","series-title":"Lecture Notes in Computer Science","doi-asserted-by":"publisher","first-page":"818","DOI":"10.1007\/978-3-319-10590-1_53","volume-title":"Computer Vision \u2013 ECCV 2014","author":"MD Zeiler","year":"2014","unstructured":"Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8689, pp. 818\u2013833. Springer, Cham (2014). https:\/\/doi.org\/10.1007\/978-3-319-10590-1_53"},{"key":"1_CR39","series-title":"Lecture Notes in Computer Science","doi-asserted-by":"publisher","first-page":"543","DOI":"10.1007\/978-3-319-46493-0_33","volume-title":"Computer Vision \u2013 ECCV 2016","author":"J Zhang","year":"2016","unstructured":"Zhang, J., Lin, Z., Brandt, J., Shen, X., Sclaroff, S.: Top-down neural attention by excitation backprop. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9908, pp. 543\u2013559. Springer, Cham (2016). https:\/\/doi.org\/10.1007\/978-3-319-46493-0_33"},{"key":"1_CR40","unstructured":"Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Object detectors emerge in deep scene CNNs. In: Proceedings of the ICLR (2015)"},{"key":"1_CR41","doi-asserted-by":"crossref","unstructured":"Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Learning deep features for discriminative localization. 
In: CVPR (2016)","DOI":"10.1109\/CVPR.2016.319"},{"key":"1_CR42","series-title":"Lecture Notes in Computer Science","doi-asserted-by":"publisher","first-page":"122","DOI":"10.1007\/978-3-030-01237-3_8","volume-title":"Computer Vision \u2013 ECCV 2018","author":"B Zhou","year":"2018","unstructured":"Zhou, B., Sun, Y., Bau, D., Torralba, A.: Interpretable basis decomposition for visual explanation. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11212, pp. 122\u2013138. Springer, Cham (2018). https:\/\/doi.org\/10.1007\/978-3-030-01237-3_8"}],"container-title":["Lecture Notes in Computer Science","xxAI - Beyond Explainable AI"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/978-3-031-04083-2_1","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,2,2]],"date-time":"2023-02-02T08:12:21Z","timestamp":1675325541000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/978-3-031-04083-2_1"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022]]},"ISBN":["9783031040825","9783031040832"],"references-count":42,"URL":"https:\/\/doi.org\/10.1007\/978-3-031-04083-2_1","relation":{},"ISSN":["0302-9743","1611-3349"],"issn-type":[{"value":"0302-9743","type":"print"},{"value":"1611-3349","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022]]},"assertion":[{"value":"17 April 2022","order":1,"name":"first_online","label":"First Online","group":{"name":"ChapterHistory","label":"Chapter History"}},{"value":"xxAI","order":1,"name":"conference_acronym","label":"Conference Acronym","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"International Workshop on Extending Explainable AI Beyond Deep Models and Classifiers","order":2,"name":"conference_name","label":"Conference Name","group":{"name":"ConferenceInfo","label":"Conference 
Information"}},{"value":"Vienna","order":3,"name":"conference_city","label":"Conference City","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Austria","order":4,"name":"conference_country","label":"Conference Country","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"2020","order":5,"name":"conference_year","label":"Conference Year","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"18 July 2020","order":7,"name":"conference_start_date","label":"Conference Start Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"18 July 2020","order":8,"name":"conference_end_date","label":"Conference End Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"xxai2020","order":10,"name":"conference_id","label":"Conference ID","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"https:\/\/human-centered.ai\/xxai-icml-2020\/","order":11,"name":"conference_url","label":"Conference URL","group":{"name":"ConferenceInfo","label":"Conference Information"}}]}}