{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T04:10:15Z","timestamp":1760242215078,"version":"build-2065373602"},"publisher-location":"Cham","reference-count":49,"publisher":"Springer Nature Switzerland","isbn-type":[{"value":"9783032083265","type":"print"},{"value":"9783032083272","type":"electronic"}],"license":[{"start":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T00:00:00Z","timestamp":1760227200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T00:00:00Z","timestamp":1760227200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2026]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>As machine learning models are increasingly considered for high-stakes domains, effective explanation methods are crucial to ensure that their prediction strategies are transparent to the user. Over the years, numerous metrics have been proposed to assess quality of explanations. However, their practical applicability remains unclear, in particular due to a limited understanding of which specific aspects each metric rewards. In this paper we propose a new framework based on spectral analysis of explanation outcomes to systematically capture the multifaceted properties of different explanation techniques. Our analysis uncovers two distinct factors of explanation quality-<jats:italic>stability<\/jats:italic> and <jats:italic>target sensitivity<\/jats:italic>\u2013that can be directly observed through spectral decomposition. 
Experiments on both MNIST and ImageNet show that popular evaluation techniques (e.g., pixel-flipping, entropy) partially capture the trade-offs between these factors. Overall, our framework provides a foundational basis for understanding explanation quality, guiding the development of more reliable techniques for evaluating explanations.<\/jats:p>","DOI":"10.1007\/978-3-032-08327-2_14","type":"book-chapter","created":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T18:06:54Z","timestamp":1760206014000},"page":"289-309","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Uncovering the Structure of Explanation Quality with Spectral Analysis"],"prefix":"10.1007","author":[{"ORCID":"https:\/\/orcid.org\/0009-0002-0792-4348","authenticated-orcid":false,"given":"Johannes","family":"Mae\u00df","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7243-6186","authenticated-orcid":false,"given":"Gr\u00e9goire","family":"Montavon","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3970-4569","authenticated-orcid":false,"given":"Shinichi","family":"Nakajima","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3861-7685","authenticated-orcid":false,"given":"Klaus-Robert","family":"M\u00fcller","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0009-0006-3768-0259","authenticated-orcid":false,"given":"Thomas","family":"Schnake","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,10,12]]},"reference":[{"key":"14_CR1","unstructured":"Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., Kim, B.: Sanity checks for saliency maps. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 9525\u20139536. 
NIPS\u201918, Curran Associates Inc., Red Hook, NY, USA (2018)"},{"key":"14_CR2","doi-asserted-by":"crossref","unstructured":"Ancona, M., Ceolini, E., \u00d6ztireli, C., Gross, M.: Towards better understanding of gradient-based attribution methods for deep neural networks. In: ICLR (Poster). OpenReview.net (2018)","DOI":"10.1007\/978-3-030-28954-6_9"},{"key":"14_CR3","doi-asserted-by":"crossref","unstructured":"Ancona, M., Ceolini, E., \u00d6ztireli, C., Gross, M.H.: Gradient-based attribution methods. In: Explainable AI, Lecture Notes in Computer Science, vol. 11700, pp. 169\u2013191. Springer (2019)","DOI":"10.1007\/978-3-030-28954-6_9"},{"key":"14_CR4","doi-asserted-by":"publisher","first-page":"82","DOI":"10.1016\/j.inffus.2019.12.012","volume":"58","author":"AB Arrieta","year":"2020","unstructured":"Arrieta, A.B., et al.: Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82\u2013115 (2020)","journal-title":"Inf. Fusion"},{"issue":"7","key":"14_CR5","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0130140","volume":"10","author":"S Bach","year":"2015","unstructured":"Bach, S., Binder, A., Montavon, G., Klauschen, F., M\u00fcller, K.R., Samek, W.: On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 10(7), e0130140 (2015)","journal-title":"PLoS ONE"},{"key":"14_CR6","first-page":"1803","volume":"11","author":"D Baehrens","year":"2010","unstructured":"Baehrens, D., Schroeter, T., Harmeling, S., Kawanabe, M., Hansen, K., M\u00fcller, K.R.: How to explain individual classification decisions. J. Mach. Learn. Res. 11, 1803\u20131831 (2010)","journal-title":"J. Mach. Learn. Res."},{"key":"14_CR7","unstructured":"Balduzzi, D., et al.: The shattered gradients problem: if resnets are the answer, then what is the question? In: International Conference on Machine Learning, pp. 342\u2013350. 
PMLR (2017)"},{"key":"14_CR8","doi-asserted-by":"crossref","unstructured":"Binder, A., Weber, L., Lapuschkin, S., Montavon, G., M\u00fcller, K.R., Samek, W.: Shortcomings of top-down randomization-based sanity checks for evaluations of deep neural network explanations. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 16143\u201316152 (2023)","DOI":"10.1109\/CVPR52729.2023.01549"},{"key":"14_CR9","unstructured":"Bluecher, S., Vielhaben, J., Strodthoff, N.: Decoupling pixel flipping and occlusion strategy for consistent XAI benchmarks. In: Transactions on Machine Learning Research (2024)"},{"key":"14_CR10","unstructured":"Bradski, G.: The OpenCV Library. Dr. Dobb\u2019s Journal of Software Tools (2000)"},{"key":"14_CR11","doi-asserted-by":"crossref","unstructured":"Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., Elhadad, N.: Intelligible models for healthcare: predicting pneumonia risk and hospital 30-day readmission. In: KDD, pp. 1721\u20131730. ACM (2015)","DOI":"10.1145\/2783258.2788613"},{"issue":"11","key":"14_CR12","doi-asserted-by":"publisher","first-page":"7283","DOI":"10.1109\/TPAMI.2024.3388275","volume":"46","author":"P Chormai","year":"2024","unstructured":"Chormai, P., Herrmann, J., M\u00fcller, K.R., Montavon, G.: Disentangled explanations of neural network predictions by finding relevant subspaces. IEEE Trans. Pattern Anal. Mach. Intell. 46(11), 7283\u20137299 (2024)","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"14_CR13","unstructured":"Dabkowski, P., Gal, Y.: Real time image saliency for black box classifiers. In: NIPS, pp. 6967\u20136976 (2017)"},{"key":"14_CR14","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: a large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 
248\u2013255 (2009)","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"14_CR15","unstructured":"Dombrowski, A.K., Alber, M., Anders, C., Ackermann, M., M\u00fcller, K.R., Kessel, P.: Explanations can be manipulated and geometry is to blame. Adv. Neural Info. Process. Syst. 32 (2019)"},{"key":"14_CR16","unstructured":"Doshi-Velez, F., Kim, B.: A roadmap for a rigorous science of interpretability. CoRR abs\/1702.08608 (2017)"},{"issue":"34","key":"14_CR17","first-page":"1","volume":"24","author":"A Hedstr\u00f6m","year":"2023","unstructured":"Hedstr\u00f6m, A., et al.: Quantus: an explainable AI toolkit for responsible evaluation of neural network explanations and beyond. J. Mach. Learn. Res. 24(34), 1\u201311 (2023)","journal-title":"J. Mach. Learn. Res."},{"key":"14_CR18","unstructured":"Hense, J., et al.: XMIL: insightful explanations for multiple instance learning in histopathology (2025). https:\/\/arxiv.org\/abs\/2406.04280"},{"key":"14_CR19","unstructured":"Kaggle: Digit recognizer dataset (2017). https:\/\/www.kaggle.com\/competitions\/digit-recognizer\/data. Accessed01 April 2025"},{"issue":"1","key":"14_CR20","doi-asserted-by":"publisher","first-page":"541","DOI":"10.1146\/annurev-pathmechdis-051222-113147","volume":"19","author":"F Klauschen","year":"2024","unstructured":"Klauschen, F., et al.: Toward explainable artificial intelligence for precision pathology. Annu. Rev. Pathol. 19(1), 541\u2013570 (2024)","journal-title":"Annu. Rev. Pathol."},{"key":"14_CR21","unstructured":"Koh, P.W., Liang, P.: Understanding black-box predictions via influence functions. In: ICML. Proceedings of Machine Learning Research, vol.\u00a070, pp. 1885\u20131894. PMLR (2017)"},{"issue":"4","key":"14_CR22","doi-asserted-by":"publisher","first-page":"255","DOI":"10.6028\/jres.045.026","volume":"45","author":"C Lanczos","year":"1950","unstructured":"Lanczos, C.: An iteration method for the solution of the eigenvalue problem of linear differential and integral operators. J. Res. Natl. 
Bur. Stand. 45(4), 255\u2013282 (1950)","journal-title":"J. Res. Natl. Bur. Stand."},{"issue":"1","key":"14_CR23","doi-asserted-by":"publisher","first-page":"1096","DOI":"10.1038\/s41467-019-08987-4","volume":"10","author":"S Lapuschkin","year":"2019","unstructured":"Lapuschkin, S., W\u00e4ldchen, S., Binder, A., Montavon, G., Samek, W., M\u00fcller, K.R.: Unmasking Clever Hans predictors and assessing what machines really learn. Nat. Commun. 10(1), 1096 (2019)","journal-title":"Nat. Commun."},{"key":"14_CR24","unstructured":"LeCun, Y., Cortes, C.: MNIST handwritten digit database (2010). http:\/\/yann.lecun.com\/exdb\/mnist\/"},{"key":"14_CR25","doi-asserted-by":"crossref","unstructured":"Letzgus, S., M\u00fcller, K.R.: An explainable AI framework for robust and transparent data-driven wind turbine power curve models (2024)","DOI":"10.1016\/j.egyai.2023.100328"},{"key":"14_CR26","doi-asserted-by":"crossref","unstructured":"Marcel, S., Rodriguez, Y.: Torchvision the machine-vision package of torch (2010). https:\/\/pypi.org\/project\/torchvision\/","DOI":"10.1145\/1873951.1874254"},{"key":"14_CR27","doi-asserted-by":"crossref","unstructured":"Montavon, G.: Gradient-based vs. propagation-based explanations: An axiomatic comparison. In: Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, pp. 253\u2013265 (2019)","DOI":"10.1007\/978-3-030-28954-6_13"},{"key":"14_CR28","doi-asserted-by":"crossref","unstructured":"Montavon, G., Binder, A., Lapuschkin, S., Samek, W., M\u00fcller, K.R.: Layer-wise relevance propagation: an overview. In: Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, pp. 193\u2013209 (2019)","DOI":"10.1007\/978-3-030-28954-6_10"},{"key":"14_CR29","doi-asserted-by":"crossref","unstructured":"Nauta, M., et al.: From anecdotal evidence to quantitative evaluation methods: a systematic review on evaluating explainable AI. ACM Comput. Surv. 
55(13s) (Jul 2023)","DOI":"10.1145\/3583558"},{"key":"14_CR30","doi-asserted-by":"crossref","unstructured":"Pahde, F., Yolcu, G.\u00dc., Binder, A., Samek, W., Lapuschkin, S.: Optimizing explanations by network canonization and hyperparameter search. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 3818\u20133827 (2023)","DOI":"10.1109\/CVPRW59228.2023.00396"},{"key":"14_CR31","unstructured":"Petsiuk, V., Das, A., Saenko, K.: RISE: randomized input sampling for explanation of black-box models. In: BMVC, p.\u00a0151. BMVA Press (2018)"},{"issue":"11","key":"14_CR32","doi-asserted-by":"publisher","first-page":"2660","DOI":"10.1109\/TNNLS.2016.2599820","volume":"28","author":"W Samek","year":"2016","unstructured":"Samek, W., Binder, A., Montavon, G., Lapuschkin, S., M\u00fcller, K.R.: Evaluating the visualization of what a deep neural network has learned. IEEE Trans. Neural Netw. Learn. Syst. 28(11), 2660\u20132673 (2016)","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"issue":"3","key":"14_CR33","doi-asserted-by":"publisher","first-page":"247","DOI":"10.1109\/JPROC.2021.3060483","volume":"109","author":"W Samek","year":"2021","unstructured":"Samek, W., Montavon, G., Lapuschkin, S., Anders, C.J., M\u00fcller, K.R.: Explaining deep neural networks and beyond: a review of methods and applications. Proc. IEEE 109(3), 247\u2013278 (2021)","journal-title":"Proc. IEEE"},{"issue":"11","key":"14_CR34","doi-asserted-by":"publisher","first-page":"7581","DOI":"10.1109\/TPAMI.2021.3115452","volume":"44","author":"T Schnake","year":"2022","unstructured":"Schnake, T., et al.: Higher-order explanations of graph neural networks via relevant walks. IEEE Trans. Pattern Anal. Mach. Intell. 44(11), 7581\u20137596 (2022)","journal-title":"IEEE Trans. Pattern Anal. Mach. 
Intell."},{"key":"14_CR35","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2024.102923","volume":"118","author":"T Schnake","year":"2025","unstructured":"Schnake, T., et al.: Towards symbolic XAI \u2013 explanation through human understandable logical relationships between features. Info. Fusion 118, 102923 (2025)","journal-title":"Info. Fusion"},{"issue":"2","key":"14_CR36","doi-asserted-by":"publisher","first-page":"336","DOI":"10.1007\/s11263-019-01228-7","volume":"128","author":"RR Selvaraju","year":"2020","unstructured":"Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-Cam: visual explanations from deep networks via gradient-based localization. Int. J. Comput. Vis. 128(2), 336\u2013359 (2020)","journal-title":"Int. J. Comput. Vis."},{"key":"14_CR37","doi-asserted-by":"crossref","unstructured":"Shapley, L.S.: A Value for n-Person Games, pp. 307\u2013318. Princeton University Press (1953)","DOI":"10.1515\/9781400881970-018"},{"key":"14_CR38","unstructured":"Sixt, L., Granz, M., Landgraf, T.: When explanations lie: why many modified BP attributions fail. In: International Conference on Machine Learning, pp. 9046\u20139057. PMLR (2020)"},{"key":"14_CR39","unstructured":"Smilkov, D., Thorat, N., Kim, B., Vi\u00e9gas, F.B., Wattenberg, M.: Smoothgrad: removing noise by adding noise. CoRR abs\/1706.03825 (2017)"},{"key":"14_CR40","first-page":"1","volume":"11","author":"E \u0160trumbelj","year":"2010","unstructured":"\u0160trumbelj, E., Kononenko, I.: An efficient explanation of individual classifications using game theory. J. Mach. Learn. Res. 11, 1\u201318 (2010)","journal-title":"J. Mach. Learn. Res."},{"issue":"3","key":"14_CR41","doi-asserted-by":"publisher","first-page":"647","DOI":"10.1007\/s10115-013-0679-x","volume":"41","author":"E Strumbelj","year":"2014","unstructured":"Strumbelj, E., Kononenko, I.: Explaining prediction models and individual predictions with feature contributions. Knowl. Inf. Syst. 
41(3), 647\u2013665 (2014)","journal-title":"Knowl. Inf. Syst."},{"key":"14_CR42","unstructured":"Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks. In: International Conference on Machine Learning, pp. 3319\u20133328. PMLR (2017)"},{"key":"14_CR43","doi-asserted-by":"crossref","unstructured":"Swartout, W.R., Moore, J.D.: Explanation in Second Generation Expert Systems, pp. 543\u2013585. Springer Berlin Heidelberg (1993)","DOI":"10.1007\/978-3-642-77927-5_24"},{"issue":"1","key":"14_CR44","doi-asserted-by":"publisher","first-page":"23","DOI":"10.1080\/10867651.2004.10487596","volume":"9","author":"A Telea","year":"2004","unstructured":"Telea, A.: An image inpainting technique based on the fast marching method. J. Graph. Tools 9(1), 23\u201334 (2004)","journal-title":"J. Graph. Tools"},{"key":"14_CR45","doi-asserted-by":"crossref","unstructured":"Tseng, A., Shrikumar, A., Kundaje, A.: Fourier-transform-based attribution priors improve the interpretability and stability of deep learning models for genomics. In: Advances in Neural Information Processing Systems, vol.\u00a033, pp. 1913\u20131923. Curran Associates, Inc. (2020)","DOI":"10.1101\/2020.06.11.147272"},{"key":"14_CR46","unstructured":"Xiong, P., Schnake, T., Gastegger, M., Montavon, G., M\u00fcller, K.R., Nakajima, S.: Relevant walk search for explaining graph neural networks. In: Proceedings of the 40th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol.\u00a0202, pp. 38301\u201338324. PMLR (2023)"},{"key":"14_CR47","unstructured":"Xiong, P., Schnake, T., Montavon, G., M\u00fcller, K.R., Nakajima, S.: Efficient computation of higher-order subgraph attribution via message passing. In: Proceedings of the 39th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol.\u00a0162, pp. 24478\u201324495. 
PMLR (2022)"},{"key":"14_CR48","unstructured":"Yu, H., Varshney, L.R.: Towards deep interpretability (MUS-ROVER II): learning hierarchical representations of tonal music. In: 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net (2017)"},{"issue":"10","key":"14_CR49","doi-asserted-by":"publisher","first-page":"1084","DOI":"10.1007\/s11263-017-1059-x","volume":"126","author":"J Zhang","year":"2018","unstructured":"Zhang, J., Bargal, S.A., Lin, Z., Brandt, J., Shen, X., Sclaroff, S.: Top-down neural attention by excitation backprop. Int. J. Comput. Vis. 126(10), 1084\u20131102 (2018)","journal-title":"Int. J. Comput. Vis."}],"container-title":["Communications in Computer and Information Science","Explainable Artificial Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/978-3-032-08327-2_14","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T18:07:02Z","timestamp":1760206022000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/978-3-032-08327-2_14"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,12]]},"ISBN":["9783032083265","9783032083272"],"references-count":49,"URL":"https:\/\/doi.org\/10.1007\/978-3-032-08327-2_14","relation":{},"ISSN":["1865-0929","1865-0937"],"issn-type":[{"value":"1865-0929","type":"print"},{"value":"1865-0937","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10,12]]},"assertion":[{"value":"12 October 2025","order":1,"name":"first_online","label":"First Online","group":{"name":"ChapterHistory","label":"Chapter History"}},{"value":"xAI","order":1,"name":"conference_acronym","label":"Conference Acronym","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"World Conference on 
Explainable Artificial Intelligence","order":2,"name":"conference_name","label":"Conference Name","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Istanbul","order":3,"name":"conference_city","label":"Conference City","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"T\u00fcrkiye","order":4,"name":"conference_country","label":"Conference Country","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"2025","order":5,"name":"conference_year","label":"Conference Year","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"9 July 2025","order":7,"name":"conference_start_date","label":"Conference Start Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"11 July 2025","order":8,"name":"conference_end_date","label":"Conference End Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"3","order":9,"name":"conference_number","label":"Conference Number","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"xai2025","order":10,"name":"conference_id","label":"Conference ID","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"https:\/\/xaiworldconference.com\/2025\/","order":11,"name":"conference_url","label":"Conference URL","group":{"name":"ConferenceInfo","label":"Conference Information"}}]}}