{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,26]],"date-time":"2026-02-26T15:25:13Z","timestamp":1772119513619,"version":"3.50.1"},"reference-count":103,"publisher":"Springer Science and Business Media LLC","issue":"8","license":[{"start":{"date-parts":[[2025,5,14]],"date-time":"2025-05-14T00:00:00Z","timestamp":1747180800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,5,14]],"date-time":"2025-05-14T00:00:00Z","timestamp":1747180800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Artif Intell Rev"],"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>\n                    Healthcare providers, policymakers, and defense contractors need to understand many types of machine learning model behaviors. While eXplainable Artificial Intelligence (XAI) provides tools for interpreting these behaviors, few frameworks, surveys, and taxonomies produce succinct yet general notation to help researchers and practitioners describe their explainability needs and quantify whether these needs are met. Such quantified comparisons could help individuals rank XAI methods by their relevance to use-cases, select explanations best suited for individual users, and evaluate what explanations are most useful for describing model behaviors. 
This paper collects, decomposes, and abstracts subcomponents of common XAI methods to identify a\n                    <jats:italic>mathematically grounded<\/jats:italic>\n                    syntax that\n                    <jats:italic>applies generally<\/jats:italic>\n                    to describing\n                    <jats:italic>modern and future<\/jats:italic>\n                    explanation types while remaining\n                    <jats:italic>useful for discovering novel XAI methods<\/jats:italic>\n                    . The resulting syntax, introduced as the\n                    <jats:italic>Qi<\/jats:italic>\n                    -Framework, generally defines explanation types in terms of the information being explained, their utility to inspectors, and the methods and information used to produce explanations. Just as programming languages define syntax to structure, simplify, and standardize software development, so too the\n                    <jats:italic>Qi<\/jats:italic>\n                    -Framework acts as a common language to help researchers and practitioners select, compare, and discover XAI methods. 
Derivative works may extend and implement the\n                    <jats:italic>Qi<\/jats:italic>\n                    -Framework to develop a more rigorous science for interpretable machine learning and inspire collaborative competition across XAI research.\n                  <\/jats:p>","DOI":"10.1007\/s10462-025-11216-8","type":"journal-article","created":{"date-parts":[[2025,5,14]],"date-time":"2025-05-14T02:45:37Z","timestamp":1747190737000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Abstracting general syntax for XAI after decomposing explanation sub-components"],"prefix":"10.1007","volume":"58","author":[{"given":"Stephen","family":"Wormald","sequence":"first","affiliation":[]},{"given":"Matheus Kunzler","family":"Maldaner","sequence":"additional","affiliation":[]},{"given":"Kristian D.","family":"O\u2019Connor","sequence":"additional","affiliation":[]},{"given":"Olivia P.","family":"Dizon-Paradis","sequence":"additional","affiliation":[]},{"given":"Damon L.","family":"Woodard","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,5,14]]},"reference":[{"key":"11216_CR1","doi-asserted-by":"publisher","first-page":"52138","DOI":"10.1109\/ACCESS.2018.2870052","volume":"6","author":"A Adadi","year":"2018","unstructured":"Adadi A, Berrada M (2018) Peeking inside the black-box: a survey on explainable artificial intelligence (xai). IEEE Access 6:52138\u201352160","journal-title":"IEEE Access"},{"issue":"10","key":"11216_CR2","doi-asserted-by":"publisher","first-page":"1340","DOI":"10.1093\/bioinformatics\/btq134","volume":"26","author":"A Altmann","year":"2010","unstructured":"Altmann A, Tolo\u015fi L, Sander O et al (2010) Permutation importance: a corrected feature importance measure. 
Bioinformatics 26(10):1340\u20131347","journal-title":"Bioinformatics"},{"key":"11216_CR3","doi-asserted-by":"publisher","first-page":"14","DOI":"10.1016\/j.inffus.2021.11.008","volume":"81","author":"L Arras","year":"2022","unstructured":"Arras L, Osman A, Samek W (2022) Clevr-xai: a benchmark dataset for the ground truth evaluation of neural network explanations. Inform Fusion 81:14\u201340","journal-title":"Inform Fusion"},{"key":"11216_CR4","unstructured":"Arya V, Bellamy RK, Chen PY, et\u00a0al (2019) One explanation does not fit all: a toolkit and taxonomy of ai explainability techniques. Preprint at arXiv:1909.03012"},{"issue":"7","key":"11216_CR5","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0130140","volume":"10","author":"S Bach","year":"2015","unstructured":"Bach S, Binder A, Montavon G et al (2015) On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 10(7):e0130140","journal-title":"PLoS ONE"},{"key":"11216_CR6","doi-asserted-by":"publisher","DOI":"10.1016\/j.artint.2021.103649","volume":"303","author":"S Badreddine","year":"2022","unstructured":"Badreddine S, Garcez A, Serafini L et al (2022) Logic tensor networks. Artif Intell 303:103649","journal-title":"Artif Intell"},{"key":"11216_CR7","doi-asserted-by":"crossref","unstructured":"Barnett AJ, Schwartz FR, Tao C, et\u00a0al (2021) Iaia-bl: a case-based interpretable deep learning model for classification of mass lesions in digital mammography. Preprint at arXiv:2103.12308","DOI":"10.1038\/s42256-021-00423-x"},{"key":"11216_CR8","doi-asserted-by":"publisher","DOI":"10.3389\/fdata.2021.688969","volume":"39","author":"V Belle","year":"2021","unstructured":"Belle V, Papantonis I (2021) Principles and practice of explainable machine learning. 
Front Big Data 39:688969","journal-title":"Front Big Data"},{"issue":"3","key":"11216_CR9","volume":"3","author":"PL Bommer","year":"2024","unstructured":"Bommer PL, Kretschmer M, Hedstr\u00f6m A et al (2024) Finding the right xai method-a guide for the evaluation and ranking of explainable ai methods in climate science. Artif Intell Earth Syst 3(3):e230074","journal-title":"Artif Intell Earth Syst"},{"key":"11216_CR10","doi-asserted-by":"publisher","first-page":"245","DOI":"10.1613\/jair.1.12228","volume":"70","author":"N Burkart","year":"2021","unstructured":"Burkart N, Huber MF (2021) A survey on the explainability of supervised machine learning. J Artif Intell Res 70:245\u2013317","journal-title":"J Artif Intell Res"},{"key":"11216_CR11","unstructured":"Carmichael Z (2024) Explainable ai for high-stakes decision-making. PhD thesis, University of Notre Dame"},{"key":"11216_CR12","doi-asserted-by":"crossref","unstructured":"Chattopadhay A, Sarkar A, Howlader P, et\u00a0al (2018) Grad-cam++: generalized gradient-based visual explanations for deep convolutional networks. In: 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), IEEE, pp 839\u2013847","DOI":"10.1109\/WACV.2018.00097"},{"key":"11216_CR13","first-page":"13050","volume":"35","author":"SL Chau","year":"2022","unstructured":"Chau SL, Hu R, Gonzalez J et al (2022) Rkhs-shap: shapley values for kernel methods. Adv Neural Inf Process Syst 35:13050\u201313063","journal-title":"Adv Neural Inf Process Syst"},{"key":"11216_CR14","unstructured":"Chen J, Song L, Wainwright M, et\u00a0al (2018) Learning to explain: an information-theoretic perspective on model interpretation. In: International Conference on Machine Learning, PMLR, pp 883\u2013892"},{"key":"11216_CR15","unstructured":"Chen C, Li O, Tao D, et\u00a0al (2019) This looks like that: deep learning for interpretable image recognition. 
Adv Neural Inform Process Syst, 32"},{"issue":"12","key":"11216_CR16","doi-asserted-by":"publisher","first-page":"772","DOI":"10.1038\/s42256-020-00265-z","volume":"2","author":"Z Chen","year":"2020","unstructured":"Chen Z, Bei Y, Rudin C (2020) Concept whitening for interpretable image recognition. Nat Mach Intell 2(12):772\u2013782","journal-title":"Nat Mach Intell"},{"key":"11216_CR17","doi-asserted-by":"crossref","unstructured":"Chen L, Qiu Y, Zhao J et al (2021) Cpkd: Concepts-prober-guided knowledge distillation for fine-grained cnn explanation. In: 2021 2nd International Conference on Electronics, Communications and Information Technology (CECIT), IEEE, pp 421\u2013426","DOI":"10.1109\/CECIT53797.2021.00081"},{"key":"11216_CR18","unstructured":"Chromik M, Schuessler M (2020) A taxonomy for human subject evaluation of black-box explanations in xai. Exss-atec@ iui 1"},{"key":"11216_CR19","doi-asserted-by":"crossref","unstructured":"Cohen G, Afshar S, Tapson J, et\u00a0al (2017) Emnist: Extending mnist to handwritten letters. In: 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, pp 2921\u20132926","DOI":"10.1109\/IJCNN.2017.7966217"},{"key":"11216_CR20","first-page":"17212","volume":"33","author":"I Covert","year":"2020","unstructured":"Covert I, Lundberg SM, Lee SI (2020) Understanding global feature contributions with additive importance measures. Adv Neural Inf Process Syst 33:17212\u201317223","journal-title":"Adv Neural Inf Process Syst"},{"key":"11216_CR21","doi-asserted-by":"crossref","unstructured":"Das S, Agarwal N, Venugopal D, et\u00a0al (2020) Taxonomy and survey of interpretable machine learning method. In: 2020 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, pp 670\u2013677","DOI":"10.1109\/SSCI47803.2020.9308404"},{"key":"11216_CR22","doi-asserted-by":"crossref","unstructured":"Datta A, Sen S, Zick Y (2016) Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. 
In: 2016 IEEE symposium on security and privacy (SP), IEEE, pp 598\u2013617","DOI":"10.1109\/SP.2016.42"},{"key":"11216_CR23","doi-asserted-by":"crossref","unstructured":"Deng J, Dong W, Socher R, et\u00a0al (2009) Imagenet: A large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition, IEEE, pp 248\u2013255","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"11216_CR24","doi-asserted-by":"publisher","unstructured":"Dijk O, oegesam, Bell R, et\u00a0al (2023) oegedijk\/explainerdashboard: v0.4.5: drop numpy<1.25 restriction. Zenodo, https:\/\/doi.org\/10.5281\/ZENODO.6407091","DOI":"10.5281\/ZENODO.6407091"},{"key":"11216_CR25","unstructured":"Dong H, Mao J, Lin T, et\u00a0al (2019) Neural logic machines. Preprint at arXiv:1904.11694"},{"key":"11216_CR26","unstructured":"Doshi-Velez F, Kim B (2017) Towards a rigorous science of interpretable machine learning. Preprint at arXiv:1702.08608"},{"issue":"9","key":"11216_CR27","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3561048","volume":"55","author":"R Dwivedi","year":"2023","unstructured":"Dwivedi R, Dave D, Naik H et al (2023) Explainable ai (xai): core ideas, techniques, and solutions. ACM Comput Surv 55(9):1\u201333","journal-title":"ACM Comput Surv"},{"key":"11216_CR28","unstructured":"Emamirad E, Omran PG, Haller A, et\u00a0al (2023) A system\u2019s approach taxonomy for user-centred xai: a survey. Preprint at arXiv:2303.02810"},{"key":"11216_CR29","doi-asserted-by":"crossref","unstructured":"Fu R, Hu Q, Dong X, et\u00a0al (2020) Axiom-based grad-cam: towards accurate visualization and explanation of cnns. Preprint at arXiv:2008.02312","DOI":"10.5244\/C.34.146"},{"key":"11216_CR30","doi-asserted-by":"crossref","unstructured":"Ghojogh B, Ghodsi A, Karray F, et\u00a0al (2022) Spectral, probabilistic, and deep metric learning: tutorial and survey. 
Preprint at arXiv:2201.09267","DOI":"10.1007\/978-3-031-10602-6_11"},{"key":"11216_CR31","unstructured":"Ghorbani A, Zou J (2019) Data shapley: Equitable valuation of data for machine learning. In: International Conference on Machine Learning, PMLR, pp 2242\u20132251"},{"key":"11216_CR32","unstructured":"Ghorbani A, Wexler J, Zou JY, et\u00a0al (2019) Towards automatic concept-based explanations. Adv Neural Inform Process Syst. 32"},{"key":"11216_CR33","doi-asserted-by":"crossref","unstructured":"Hanif A, Zhang X, Wood S (2021) A survey on explainable artificial intelligence techniques and challenges. In: 2021 IEEE 25th International Enterprise Distributed Object Computing Workshop (EDOCW), IEEE, pp 81\u201389","DOI":"10.1109\/EDOCW52865.2021.00036"},{"issue":"10","key":"11216_CR34","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3563691","volume":"55","author":"R Ibrahim","year":"2023","unstructured":"Ibrahim R, Shafiq MO (2023) Explainable convolutional neural networks: a taxonomy, review, and future directions. ACM Comput Surv 55(10):1\u201337","journal-title":"ACM Comput Surv"},{"key":"11216_CR35","doi-asserted-by":"crossref","unstructured":"Jacovi A, Swayamdipta S, Ravfogel S, et\u00a0al (2021) Contrastive explanations for model interpretability. Preprint at arXiv:2103.01378","DOI":"10.18653\/v1\/2021.emnlp-main.120"},{"key":"11216_CR36","doi-asserted-by":"publisher","first-page":"5875","DOI":"10.1109\/TIP.2021.3089943","volume":"30","author":"PT Jiang","year":"2021","unstructured":"Jiang PT, Zhang CB, Hou Q et al (2021) Layercam: exploring hierarchical class activation maps for localization. IEEE Trans Image Process 30:5875\u20135888","journal-title":"IEEE Trans Image Process"},{"key":"11216_CR37","doi-asserted-by":"crossref","unstructured":"Kapishnikov A, Bolukbasi T, Vi\u00e9gas F, et\u00a0al (2019) Xrai: better attributions through regions. 
In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp 4948\u20134957","DOI":"10.1109\/ICCV.2019.00505"},{"key":"11216_CR38","doi-asserted-by":"crossref","unstructured":"Kapishnikov A, Venugopalan S, Avci B, et\u00a0al (2021) Guided integrated gradients: an adaptive path method for removing noise. In: 2021 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 5048\u20135056","DOI":"10.1109\/CVPR46437.2021.00501"},{"key":"11216_CR39","doi-asserted-by":"publisher","first-page":"5455","DOI":"10.1007\/s10462-020-09825-6","volume":"53","author":"A Khan","year":"2020","unstructured":"Khan A, Sohail A, Zahoora U et al (2020) A survey of the recent architectures of deep convolutional neural networks. Artif Intell Rev 53:5455\u20135516","journal-title":"Artif Intell Rev"},{"key":"11216_CR40","unstructured":"Kim B, Wattenberg M, Gilmer J, et\u00a0al (2018) Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). In: International Conference on Machine Learning, PMLR, pp 2668\u20132677"},{"key":"11216_CR41","doi-asserted-by":"crossref","unstructured":"Kim E, Kim S, Seo M, et\u00a0al (2021) Xprotonet: diagnosis in chest radiography with global and local explanations. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp 15719\u201315728","DOI":"10.1109\/CVPR46437.2021.01546"},{"issue":"4","key":"11216_CR42","doi-asserted-by":"publisher","first-page":"307","DOI":"10.1561\/2200000056","volume":"12","author":"DP Kingma","year":"2019","unstructured":"Kingma DP, Welling M et al (2019) An introduction to variational autoencoders. Found Trends \u00ae Mach Learn 12(4):307\u2013392","journal-title":"Found Trends \u00ae Mach Learn"},{"key":"11216_CR43","unstructured":"Koh PW, Nguyen T, Tang YS, et\u00a0al (2020) Concept bottleneck models. 
In: International Conference on Machine Learning, PMLR, pp 5338\u20135348"},{"key":"11216_CR44","doi-asserted-by":"crossref","unstructured":"Li O, Liu H, Chen C, et\u00a0al (2018) Deep learning for case-based reasoning through prototypes: a neural network that explains its predictions. In: Proceedings of the AAAI Conference on Artificial Intelligence","DOI":"10.1609\/aaai.v32i1.11771"},{"key":"11216_CR45","doi-asserted-by":"crossref","unstructured":"Li D, Liu Y, Huang J, et\u00a0al (2023) A trustworthy view on xai method evaluation. Authorea Preprints","DOI":"10.36227\/techrxiv.21067438"},{"key":"11216_CR46","doi-asserted-by":"crossref","unstructured":"Li Z, Fan S, Gu Y, et\u00a0al (2024) Flexkbqa: a flexible llm-powered framework for few-shot knowledge base question answering. In: Proceedings of the AAAI Conference on Artificial Intelligence, pp 18608\u201318616","DOI":"10.1609\/aaai.v38i17.29823"},{"key":"11216_CR47","doi-asserted-by":"publisher","DOI":"10.1016\/j.cmpb.2022.107161","volume":"226","author":"HW Loh","year":"2022","unstructured":"Loh HW, Ooi CP, Seoni S et al (2022) Application of explainable artificial intelligence for healthcare: a systematic review of the last decade (2011\u20132022). Comput Methods Programs Biomed 226:107161","journal-title":"Comput Methods Programs Biomed"},{"key":"11216_CR48","doi-asserted-by":"publisher","first-page":"3136","DOI":"10.1007\/s11263-021-01498-0","volume":"129","author":"M Losch","year":"2021","unstructured":"Losch M, Fritz M, Schiele B (2021) Semantic bottlenecks: quantifying and improving inspectability of deep representations. Int J Comput Vision 129:3136\u20133153","journal-title":"Int J Comput Vision"},{"key":"11216_CR49","unstructured":"Lundberg SM, Lee SI (2017) A unified approach to interpreting model predictions. Adv Neural Inform Process Systems. 30"},{"key":"11216_CR50","unstructured":"Lundberg SM, Erion GG, Lee SI (2018) Consistent individualized feature attribution for tree ensembles. 
Preprint at arXiv:1802.03888"},{"key":"11216_CR51","unstructured":"Madotto A, Lin Z, Winata GI, et\u00a0al (2021) Few-shot bot: Prompt-based learning for dialogue systems. Preprint at arXiv:2110.08118"},{"key":"11216_CR52","doi-asserted-by":"crossref","unstructured":"Manca G, Bhattacharya N, Maczey S, et\u00a0al (2023) Xaiprocesslens: a counterfactual-based dashboard for explainable ai in process industries. In: HHAI, pp 401\u2013403","DOI":"10.3233\/FAIA230110"},{"key":"11216_CR53","unstructured":"Mao J, Gan C, Kohli P, et\u00a0al (2019) The neuro-symbolic concept learner: interpreting scenes, words, and sentences from natural supervision. Preprint at arXiv:1904.12584"},{"key":"11216_CR54","doi-asserted-by":"crossref","unstructured":"Martins T, De\u00a0Almeida AM, Cardoso E, et\u00a0al (2023) Explainable artificial intelligence (xai): a systematic literature review on taxonomies and applications in finance. IEEE Access","DOI":"10.1109\/ACCESS.2023.3347028"},{"key":"11216_CR55","doi-asserted-by":"crossref","unstructured":"Minh D, Wang HX, Li YF, et\u00a0al (2022) Explainable artificial intelligence: a comprehensive review. Artif Intell Rev. pp 1\u201366","DOI":"10.1007\/s10462-021-10088-y"},{"key":"11216_CR56","doi-asserted-by":"crossref","unstructured":"Mir\u00f3-Nicolau M, Jaume-i Cap\u00f3 A, Moy\u00e0-Alcover G (2024) Assessing fidelity in xai post-hoc techniques: a comparative study with ground truth explanations datasets. Artif Intell. 104179","DOI":"10.1016\/j.artint.2024.104179"},{"key":"11216_CR57","doi-asserted-by":"publisher","first-page":"211","DOI":"10.1016\/j.patcog.2016.11.008","volume":"65","author":"G Montavon","year":"2017","unstructured":"Montavon G, Lapuschkin S, Binder A et al (2017) Explaining nonlinear classification decisions with deep taylor decomposition. 
Pattern Recogn 65:211\u2013222","journal-title":"Pattern Recogn"},{"issue":"2","key":"11216_CR58","doi-asserted-by":"publisher","first-page":"161","DOI":"10.1080\/00401706.1991.10484804","volume":"33","author":"MD Morris","year":"1991","unstructured":"Morris MD (1991) Factorial sampling plans for preliminary computational experiments. Technometrics 33(2):161\u2013174","journal-title":"Technometrics"},{"key":"11216_CR59","doi-asserted-by":"crossref","unstructured":"Muhammad MB, Yeasin M (2020) Eigen-cam: Class activation map using principal components. In: 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, pp 1\u20137","DOI":"10.1109\/IJCNN48605.2020.9206626"},{"key":"11216_CR60","doi-asserted-by":"crossref","unstructured":"Nauta M, Van\u00a0Bree R, Seifert C (2021) Neural prototype trees for interpretable fine-grained image recognition. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp 14933\u201314943","DOI":"10.1109\/CVPR46437.2021.01469"},{"key":"11216_CR61","unstructured":"Nettles AT (2004) Allowables for structural composites. In: International Conference on Composites Engineering, NASA Marshall Space Flight Center, Hilton Head, SC, United States, https:\/\/ntrs.nasa.gov\/citations\/20040111395"},{"key":"11216_CR62","unstructured":"Nomm S (2023) Towards the linear algebra based taxonomy of xai explanations. Preprint at arXiv:2301.13138"},{"key":"11216_CR63","doi-asserted-by":"publisher","first-page":"1","DOI":"10.3389\/fnana.2013.00001","volume":"7","author":"R Perin","year":"2013","unstructured":"Perin R, Telefont M, Markram H (2013) Computing the size and number of neuronal clusters in local circuits. Front Neuroanat 7:1","journal-title":"Front Neuroanat"},{"issue":"1","key":"11216_CR64","doi-asserted-by":"publisher","first-page":"46","DOI":"10.1109\/TTS.2023.3239921","volume":"4","author":"D Petkovic","year":"2023","unstructured":"Petkovic D (2023) It is not \u201caccuracy vs. 
explainability\u2019\u2019-we need both for trustworthy ai systems. IEEE Trans Technol Soc 4(1):46\u201353","journal-title":"IEEE Trans Technol Soc"},{"key":"11216_CR65","unstructured":"Petsiuk V, Das A, Saenko K (2018) Rise: Randomized input sampling for explanation of black-box models. Preprint at arXiv:1806.07421"},{"key":"11216_CR66","doi-asserted-by":"crossref","unstructured":"Petsiuk V, Jain R, Manjunatha V, et\u00a0al (2021) Black-box explanation of object detectors via saliency maps. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp 11443\u201311452","DOI":"10.1109\/CVPR46437.2021.01128"},{"key":"11216_CR67","unstructured":"Plumb G, Molitor D, Talwalkar AS (2018) Model agnostic supervised local explanations. Adv Neural Inform Process Syst. 31"},{"key":"11216_CR68","doi-asserted-by":"crossref","unstructured":"Ribeiro MT, Singh S, Guestrin C (2016) \" Why should i trust you?\" Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp 1135\u20131144","DOI":"10.1145\/2939672.2939778"},{"key":"11216_CR69","doi-asserted-by":"crossref","unstructured":"Ribeiro MT, Singh S, Guestrin C (2018) Anchors: high-precision model-agnostic explanations. In: Proceedings of the AAAI Conference on Artificial Intelligence","DOI":"10.1609\/aaai.v32i1.11491"},{"key":"11216_CR70","unstructured":"Riegel R, Gray A, Luus F, et\u00a0al (2020) Logical neural networks. Preprint at arXiv:2006.13155"},{"key":"11216_CR71","doi-asserted-by":"publisher","first-page":"211","DOI":"10.1007\/s11263-015-0816-y","volume":"115","author":"O Russakovsky","year":"2015","unstructured":"Russakovsky O, Deng J, Su H et al (2015) Imagenet large scale visual recognition challenge. 
Int J Comput Vision 115:211\u2013252","journal-title":"Int J Comput Vision"},{"key":"11216_CR72","doi-asserted-by":"crossref","unstructured":"Rymarczyk D, Struski \u0141, Tabor J, et\u00a0al (2020) Protopshare: Prototype sharing for interpretable image classification and similarity discovery. Preprint at arXiv:2011.14340","DOI":"10.1145\/3447548.3467245"},{"key":"11216_CR73","unstructured":"Sabour S, Frosst N, Hinton GE (2017) Dynamic routing between capsules. Adv Neural Inform Process syst. 30"},{"key":"11216_CR74","doi-asserted-by":"crossref","unstructured":"Schwalbe G, Finzel B (2023) A comprehensive taxonomy for explainable artificial intelligence: a systematic survey of surveys on methods and concepts. Data Mining Knowl Discov. pp 1\u201359","DOI":"10.1007\/s10618-022-00867-8"},{"key":"11216_CR75","doi-asserted-by":"crossref","unstructured":"Selvaraju RR, Cogswell M, Das A, et\u00a0al (2017) Grad-cam: visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE International Conference on Computer Vision, pp 618\u2013626","DOI":"10.1109\/ICCV.2017.74"},{"key":"11216_CR76","unstructured":"Shrikumar A, Greenside P, Kundaje A (2017) Learning important features through propagating activation differences. In: International Conference on Machine Learning, PMLR, pp 3145\u20133153"},{"key":"11216_CR77","unstructured":"Simonyan K, Vedaldi A, Zisserman A (2013) Deep inside convolutional networks: visualising image classification models and saliency maps. Preprint at arXiv:1312.6034"},{"key":"11216_CR78","unstructured":"Simonyan K, Vedaldi A, Zisserman A (2019) Deep inside convolutional networks: visualising image classification models and saliency maps. arxiv 2013. 
Preprint at arXiv:1312.6034"},{"key":"11216_CR79","doi-asserted-by":"publisher","first-page":"178","DOI":"10.1016\/j.neunet.2022.03.034","volume":"151","author":"G Singh","year":"2022","unstructured":"Singh G (2022) Think positive: an interpretable neural network for image recognition. Neural Netw 151:178\u2013189","journal-title":"Neural Netw"},{"key":"11216_CR80","doi-asserted-by":"publisher","first-page":"85198","DOI":"10.1109\/ACCESS.2021.3087583","volume":"9","author":"G Singh","year":"2021","unstructured":"Singh G, Yow KC (2021a) An interpretable deep learning model for covid-19 detection with chest x-ray images. IEEE Access 9:85198\u201385208","journal-title":"IEEE Access"},{"issue":"9","key":"11216_CR81","doi-asserted-by":"publisher","first-page":"1732","DOI":"10.3390\/diagnostics11091732","volume":"11","author":"G Singh","year":"2021","unstructured":"Singh G, Yow KC (2021b) Object or background: an interpretable deep learning model for covid-19 detection from ct-scan images. Diagnostics 11(9):1732","journal-title":"Diagnostics"},{"key":"11216_CR82","doi-asserted-by":"publisher","first-page":"41482","DOI":"10.1109\/ACCESS.2021.3064838","volume":"9","author":"G Singh","year":"2021","unstructured":"Singh G, Yow KC (2021c) These do not look like those: an interpretable deep learning model for image recognition. IEEE Access 9:41482\u201341493","journal-title":"IEEE Access"},{"key":"11216_CR83","unstructured":"Smilkov D, Thorat N, Kim B, et\u00a0al (2017) Smoothgrad: removing noise by adding noise. Preprint at arXiv:1706.03825"},{"key":"11216_CR84","doi-asserted-by":"crossref","unstructured":"Speith T (2022) A review of taxonomies of explainable artificial intelligence (xai) methods. 
In: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp 2239\u20132250","DOI":"10.1145\/3531146.3534639"},{"issue":"1","key":"11216_CR85","first-page":"1064","volume":"26","author":"T Spinner","year":"2019","unstructured":"Spinner T, Schlegel U, Sch\u00e4fer H et al (2019) explainer: a visual analytics framework for interactive and explainable machine learning. IEEE Trans Visual Comput Graphics 26(1):1064\u20131074","journal-title":"IEEE Trans Visual Comput Graphics"},{"key":"11216_CR86","unstructured":"Springenberg JT, Dosovitskiy A, Brox T, et\u00a0al (2014) Striving for simplicity: the all convolutional net. Preprint at arXiv:1412.6806"},{"key":"11216_CR87","doi-asserted-by":"crossref","unstructured":"Staniak M, Biecek P (2018) Explanations of model predictions with live and breakdown packages. Preprint at arXiv:1804.01955","DOI":"10.32614\/RJ-2018-072"},{"key":"11216_CR88","doi-asserted-by":"publisher","first-page":"647","DOI":"10.1007\/s10115-013-0679-x","volume":"41","author":"E \u0160trumbelj","year":"2014","unstructured":"\u0160trumbelj E, Kononenko I (2014) Explaining prediction models and individual predictions with feature contributions. Knowl Inf Syst 41:647\u2013665","journal-title":"Knowl Inf Syst"},{"issue":"1","key":"11216_CR89","doi-asserted-by":"publisher","DOI":"10.23915\/distill.00022","volume":"5","author":"P Sturmfels","year":"2020","unstructured":"Sturmfels P, Lundberg S, Lee SI (2020) Visualizing the impact of feature attribution baselines. Distill 5(1):e22","journal-title":"Distill"},{"key":"11216_CR90","unstructured":"Sundararajan M, Taly A, Yan Q (2017) Axiomatic attribution for deep networks. In: International Conference on Machine Learning, PMLR, pp 3319\u20133328"},{"key":"11216_CR91","doi-asserted-by":"crossref","unstructured":"Szepannek G, L\u00fcbke K (2023) How much do we see? On the explainability of partial dependence plots for credit risk scoring. Argumenta Oeconomica. 
1(50)","DOI":"10.15611\/aoe.2023.1.07"},{"key":"11216_CR92","first-page":"841","volume":"31","author":"S Wachter","year":"2017","unstructured":"Wachter S, Mittelstadt B, Russell C (2017) Counterfactual explanations without opening the black box: automated decisions and the gdpr. Harv JL & Tech 31:841","journal-title":"Harv JL & Tech"},{"key":"11216_CR93","doi-asserted-by":"crossref","unstructured":"Wang H, Wang Z, Du M, et\u00a0al (2020) Score-cam: Score-weighted visual explanations for convolutional neural networks. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp 24\u201325","DOI":"10.1109\/CVPRW50498.2020.00020"},{"key":"11216_CR94","doi-asserted-by":"crossref","unstructured":"Wang J, Liu H, Wang X, et\u00a0al (2021) Interpretable image recognition by constructing transparent embedding space. In: 2021 IEEE\/CVF International Conference on Computer Vision (ICCV), pp 875\u2013884","DOI":"10.1109\/ICCV48922.2021.00093"},{"key":"11216_CR95","doi-asserted-by":"crossref","unstructured":"Wickramanayake S, Hsu W, Lee ML (2021) Comprehensible convolutional neural networks via guided concept learning. In: 2021 International Joint Conference on Neural Networks (IJCNN), IEEE, pp 1\u20138","DOI":"10.1109\/IJCNN52387.2021.9534269"},{"key":"11216_CR96","first-page":"20554","volume":"33","author":"CK Yeh","year":"2020","unstructured":"Yeh CK, Kim B, Arik S et al (2020) On completeness-aware concept-based explanations in deep neural networks. Adv Neural Inf Process Syst 33:20554\u201320565","journal-title":"Adv Neural Inf Process Syst"},{"issue":"5","key":"11216_CR97","first-page":"5782","volume":"45","author":"H Yuan","year":"2022","unstructured":"Yuan H, Yu H, Gui S et al (2022) Explainability in graph neural networks: a taxonomic survey. 
IEEE Trans Pattern Anal Mach Intell 45(5):5782\u20135799","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"11216_CR98","doi-asserted-by":"crossref","unstructured":"Zhang Q, Cao R, Wu YN, et\u00a0al (2017) Growing interpretable part graphs on convnets via multi-shot learning. In: Proceedings of the AAAI Conference on Artificial Intelligence","DOI":"10.1609\/aaai.v31i1.10924"},{"key":"11216_CR99","doi-asserted-by":"crossref","unstructured":"Zhang Q, Cao R, Shi F, et\u00a0al (2018) Interpreting cnn knowledge via an explanatory graph. In: Proceedings of the AAAI Conference on Artificial Intelligence","DOI":"10.1609\/aaai.v32i1.11819"},{"key":"11216_CR100","doi-asserted-by":"crossref","unstructured":"Zhang Q, Yang Y, Ma H, et\u00a0al (2019) Interpreting cnns via decision trees. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp 6261\u20136270","DOI":"10.1109\/CVPR.2019.00642"},{"key":"11216_CR101","doi-asserted-by":"crossref","unstructured":"Zhou B, Khosla A, Lapedriza A, et\u00a0al (2016) Learning deep features for discriminative localization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 2921\u20132929","DOI":"10.1109\/CVPR.2016.319"},{"issue":"9","key":"11216_CR102","doi-asserted-by":"publisher","first-page":"2131","DOI":"10.1109\/TPAMI.2018.2858759","volume":"41","author":"B Zhou","year":"2018","unstructured":"Zhou B, Bau D, Oliva A et al (2018) Interpreting deep visual representations via network dissection. IEEE Trans Pattern Anal Mach Intell 41(9):2131\u20132145","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"11216_CR103","doi-asserted-by":"crossref","unstructured":"Zhu Y, Nie JY, Su Y, et\u00a0al (2022) From easy to hard: a dual curriculum learning framework for context-aware document ranking. 
In: Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pp 2784\u20132794","DOI":"10.1145\/3511808.3557328"}],"container-title":["Artificial Intelligence Review"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10462-025-11216-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10462-025-11216-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10462-025-11216-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,23]],"date-time":"2025-06-23T06:35:41Z","timestamp":1750660541000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10462-025-11216-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,5,14]]},"references-count":103,"journal-issue":{"issue":"8","published-online":{"date-parts":[[2025,8]]}},"alternative-id":["11216"],"URL":"https:\/\/doi.org\/10.1007\/s10462-025-11216-8","relation":{"has-preprint":[{"id-type":"doi","id":"10.21203\/rs.3.rs-4824427\/v1","asserted-by":"object"}]},"ISSN":["1573-7462"],"issn-type":[{"value":"1573-7462","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,5,14]]},"assertion":[{"value":"29 March 2025","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"14 May 2025","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no Conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of 
interest"}},{"value":"As no individual participants were involved in the study, no informed consent was required.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Informed consent"}},{"value":"This article does not contain any studies with human participants or animals performed by any of the authors.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Research involving human and animal participants"}}],"article-number":"247"}}