{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,31]],"date-time":"2026-03-31T02:06:30Z","timestamp":1774922790253,"version":"3.50.1"},"reference-count":41,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,6,14]],"date-time":"2025-06-14T00:00:00Z","timestamp":1749859200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,6,14]],"date-time":"2025-06-14T00:00:00Z","timestamp":1749859200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["npj Digit. Med."],"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>The rapid growth of clinical explainable AI (XAI) models raised concerns over unclear purposes and false hope regarding explanations. Currently, no standardised metrics exist for XAI evaluation. We developed a clinician-informed, 14-item checklist including clinical, machine and decision attributes. 
This is the first step toward XAI standardisation and transparent reporting of XAI methods to enhance trust, reduce risks, foster AI adoption, and improve decisions to determine the true clinical potential of applied XAI.<\/jats:p>","DOI":"10.1038\/s41746-025-01764-2","type":"journal-article","created":{"date-parts":[[2025,6,14]],"date-time":"2025-06-14T16:10:30Z","timestamp":1749917430000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":11,"title":["Clinician-informed XAI evaluation checklist with metrics (CLIX-M) for AI-powered clinical decision support systems"],"prefix":"10.1038","volume":"8","author":[{"given":"Aida","family":"Brankovic","sequence":"first","affiliation":[]},{"given":"David","family":"Cook","sequence":"additional","affiliation":[]},{"given":"Jessica","family":"Rahman","sequence":"additional","affiliation":[]},{"given":"Alana","family":"Delaforce","sequence":"additional","affiliation":[]},{"given":"Jane","family":"Li","sequence":"additional","affiliation":[]},{"given":"Farah","family":"Magrabi","sequence":"additional","affiliation":[]},{"given":"Federico","family":"Cabitza","sequence":"additional","affiliation":[]},{"given":"Enrico","family":"Coiera","sequence":"additional","affiliation":[]},{"given":"DanaKai","family":"Bradford","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,6,14]]},"reference":[{"key":"1764_CR1","doi-asserted-by":"publisher","unstructured":"Amiri, Z., Taghavirashidizadeh, A. & Khorrami, P. AI-driven decision-making in healthcare information systems: a comprehensive review. J. Syst. Softw. 112470, https:\/\/doi.org\/10.1016\/j.jss.2025.112470 (2025).","DOI":"10.1016\/j.jss.2025.112470"},{"key":"1764_CR2","first-page":"337","volume":"11","author":"S Maleki Varnosfaderani","year":"2024","unstructured":"Maleki Varnosfaderani, S. & Forouzanfar, M. 
The role of AI in hospitals and clinics: transforming healthcare in the 21st century. Bioeng. Basel Switz. 11, 337 (2024).","journal-title":"Bioeng. Basel Switz."},{"key":"1764_CR3","doi-asserted-by":"publisher","first-page":"105","DOI":"10.1016\/j.techfore.2015.12.014","volume":"105","author":"M Hengstler","year":"2016","unstructured":"Hengstler, M., Enkel, E. & Duelli, S. Applied artificial intelligence and trust\u2014The case of autonomous vehicles and medical assistance devices. Technol. Forecast. Soc. Change 105, 105\u2013120 (2016).","journal-title":"Technol. Forecast. Soc. Change"},{"key":"1764_CR4","doi-asserted-by":"publisher","first-page":"592","DOI":"10.1093\/jamia\/ocz229","volume":"27","author":"WK Diprose","year":"2020","unstructured":"Diprose, W. K. et al. Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. J. Am. Med. Inform. Assoc. 27, 592\u2013600 (2020).","journal-title":"J. Am. Med. Inform. Assoc."},{"key":"1764_CR5","doi-asserted-by":"publisher","first-page":"25","DOI":"10.1016\/j.breast.2019.10.001","volume":"49","author":"SM Carter","year":"2020","unstructured":"Carter, S. M. et al. The ethical, legal and social implications of using artificial intelligence systems in breast cancer care. Breast 49, 25\u201332 (2020).","journal-title":"Breast"},{"key":"1764_CR6","doi-asserted-by":"publisher","first-page":"102506","DOI":"10.1016\/j.artmed.2023.102506","volume":"138","author":"F Cabitza","year":"2023","unstructured":"Cabitza, F. et al. Rams, hounds and white boxes: investigating human\u2013AI collaboration protocols in medical diagnosis. Artif. Intell. Med. 138, 102506 (2023).","journal-title":"Artif. Intell. Med."},{"key":"1764_CR7","doi-asserted-by":"publisher","unstructured":"Cabitza, F., Fregosi, C., Campagner, A. & Natali, C. Explanations considered harmful: the impact of misleading explanations on accuracy in hybrid human-AI decision making. In: Explainable artificial intelligence (eds. 
Longo, L., Lapuschkin, S. & Seifert, C.) 255\u2013269 (Springer Nature, 2024). https:\/\/doi.org\/10.1007\/978-3-031-63803-9_14.","DOI":"10.1007\/978-3-031-63803-9_14"},{"key":"1764_CR8","doi-asserted-by":"publisher","unstructured":"Brankovic, A., Huang, W., Cook, D., Khanna, S. & Bialkowski, K. Elucidating discrepancy in explanations of predictive models developed using EMR. In MEDINFO 2023 \u2014 the future is accessible, 865\u2013869 (IOS Press, 2024). https:\/\/doi.org\/10.3233\/SHTI231088.","DOI":"10.3233\/SHTI231088"},{"key":"1764_CR9","doi-asserted-by":"crossref","unstructured":"Brankovic, A., Cook, D., Rahman, J., Khanna, S. & Huang, W. Benchmarking the most popular XAI used for explaining clinical predictive models: Untrustworthy but could be useful. Health Inform. J. 30, 14604582241304730 (2024).","DOI":"10.1177\/14604582241304730"},{"key":"1764_CR10","unstructured":"Tonekaboni, S., Joshi, S., McCradden, M. D. & Goldenberg, A. What clinicians want: contextualizing explainable machine learning for clinical end use. In: Proceedings of the 4th machine learning for healthcare conference 359\u2013380 (PMLR, 2019)."},{"key":"1764_CR11","doi-asserted-by":"publisher","first-page":"55","DOI":"10.7326\/M14-0697","volume":"162","author":"GS Collins","year":"2015","unstructured":"Collins, G. S., Reitsma, J. B., Altman, D. G. & Moons, K. G. M. Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD): the TRIPOD statement. Ann. Intern. Med. 162, 55\u201363 (2015).","journal-title":"Ann. Intern. Med."},{"key":"1764_CR12","doi-asserted-by":"publisher","unstructured":"Collins, G. S. et al. TRIPOD+AI statement: updated guidance for reporting clinical prediction models that use regression or machine learning methods. 
BMJ e078378, https:\/\/doi.org\/10.1136\/bmj-2023-078378 (2024).","DOI":"10.1136\/bmj-2023-078378"},{"key":"1764_CR13","doi-asserted-by":"publisher","first-page":"e200029","DOI":"10.1148\/ryai.2020200029","volume":"2","author":"J Mongan","year":"2020","unstructured":"Mongan, J., Moy, L. & Kahn, C. E. Checklist for artificial intelligence in medical imaging (CLAIM): a guide for authors and reviewers. Radiol. Artif. Intell. 2, e200029 (2020).","journal-title":"Radiol. Artif. Intell."},{"key":"1764_CR14","doi-asserted-by":"crossref","unstructured":"Nauta, M. et al. From anecdotal evidence to quantitative evaluation methods: a systematic review on evaluating explainable AI. ACM Comput. Surv. 55, 295:1\u2013295:42 (2023).","DOI":"10.1145\/3583558"},{"key":"1764_CR15","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2024.102301","volume":"106","author":"L Longo","year":"2024","unstructured":"Longo, L. et al. Explainable Artificial Intelligence (XAI) 2.0: a manifesto of open challenges and interdisciplinary research directions. Inf. Fusion 106, 102301 (2024).","journal-title":"Inf. Fusion"},{"key":"1764_CR16","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/s41746-022-00699-2","volume":"5","author":"H Chen","year":"2022","unstructured":"Chen, H., Gomez, C., Huang, C.-M. & Unberath, M. Explainable medical imaging AI needs human-centered design: guidelines and evidence from a systematic review. Npj Digit. Med. 5, 1\u201315 (2022).","journal-title":"Npj Digit. Med."},{"key":"1764_CR17","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3387166","volume":"11","author":"S Mohseni","year":"2021","unstructured":"Mohseni, S., Zarei, N. & Ragan, E. D. A multidisciplinary survey and framework for design and evaluation of explainable AI systems. ACM Trans. Interact. Intell. Syst. 11, 1\u201345 (2021).","journal-title":"ACM Trans. Interact. Intell. 
Syst."},{"key":"1764_CR18","doi-asserted-by":"publisher","unstructured":"Introduction to Explainable AI in Healthcare - Explainable Artificial Intelligence in the Healthcare Industry - Wiley Online Library. https:\/\/onlinelibrary.wiley.com\/doi\/, https:\/\/doi.org\/10.1002\/9781394249312.ch8 (2023).","DOI":"10.1002\/9781394249312.ch8"},{"key":"1764_CR19","doi-asserted-by":"publisher","first-page":"103444","DOI":"10.1016\/j.ijhcs.2025.103444","volume":"197","author":"FM Calisto","year":"2025","unstructured":"Calisto, F. M., Abrantes, J. M., Santiago, C., Nunes, N. J. & Nascimento, J. C. Personalized explanations for clinician-AI interaction in breast imaging diagnosis by adapting communication to expertise levels. Int. J. Hum. Comput. Stud. 197, 103444 (2025).","journal-title":"Int. J. Hum. Comput. Stud."},{"key":"1764_CR20","doi-asserted-by":"publisher","first-page":"102412","DOI":"10.1016\/j.inffus.2024.102412","volume":"108","author":"E Nasarian","year":"2024","unstructured":"Nasarian, E., Alizadehsani, R., Acharya, U. R. & Tsui, K.-L. Designing interpretable ML system to enhance trust in healthcare: a systematic review to proposed responsible clinician-AI-collaboration framework. Inf. Fusion 108, 102412 (2024).","journal-title":"Inf. Fusion"},{"key":"1764_CR21","doi-asserted-by":"publisher","unstructured":"Calisto, F. M. Human-centered design of personalized intelligent agents in medical imaging diagnosis. https:\/\/doi.org\/10.13140\/RG.2.2.28353.33126 (2024).","DOI":"10.13140\/RG.2.2.28353.33126"},{"key":"1764_CR22","doi-asserted-by":"publisher","first-page":"102684","DOI":"10.1016\/j.media.2022.102684","volume":"84","author":"W Jin","year":"2023","unstructured":"Jin, W., Li, X., Fatehi, M. & Hamarneh, G. Guidelines and evaluation of clinical explainable AI in medical image analysis. Med. Image Anal. 84, 102684 (2023).","journal-title":"Med. Image Anal."},{"key":"1764_CR23","unstructured":"Brankovic, A. et al. 
Mitigating ethical risks in the development of artificial intelligence (AI)-enabled tools with explainable AI (XAI) component (CSIRO, 2024)."},{"key":"1764_CR24","doi-asserted-by":"publisher","first-page":"1149","DOI":"10.1016\/S0140-6736(08)60505-X","volume":"371","author":"DG Altman","year":"2008","unstructured":"Altman, D. G., Simera, I., Hoey, J., Moher, D. & Schulz, K. EQUATOR: reporting guidelines for health research. Lancet Lond. Engl. 371, 1149\u20131150 (2008).","journal-title":"Lancet Lond. Engl."},{"key":"1764_CR25","unstructured":"Designing theory-driven user-centric explainable AI. In: Proceedings of the 2019 CHI conference on human factors in computing systems. https:\/\/dl.acm.org\/doi\/10.1145\/3290605.3300831 (2019)."},{"key":"1764_CR26","doi-asserted-by":"publisher","first-page":"31","DOI":"10.5032\/jae.1994.04031","volume":"35","author":"DL Clason","year":"1994","unstructured":"Clason, D. L. & Dormody, T. J. Analyzing data measured by individual likert-type items. J. Agric. Educ. 35, 31\u201335 (1994).","journal-title":"J. Agric. Educ."},{"key":"1764_CR27","doi-asserted-by":"crossref","unstructured":"Boone, H. & Boone, D. Analyzing Likert data. J. Ext. 50, 2FEA7 (2012).","DOI":"10.34068\/joe.50.02.48"},{"key":"1764_CR28","unstructured":"Logic and conversation. In Speech Acts, https:\/\/brill.com\/display\/book\/edcoll\/9789004368811\/BP000003.xml."},{"key":"1764_CR29","doi-asserted-by":"publisher","first-page":"867","DOI":"10.1038\/s42256-022-00536-x","volume":"4","author":"A Saporta","year":"2022","unstructured":"Saporta, A. et al. Benchmarking saliency methods for chest X-ray interpretation. Nat. Mach. Intell. 4, 867\u2013878 (2022).","journal-title":"Nat. Mach. Intell."},{"key":"1764_CR30","unstructured":"Applied predictive modeling. SpringerLink, https:\/\/link.springer.com\/book\/10.1007\/978-1-4614-6849-3."},{"key":"1764_CR31","unstructured":"Krishna, S. et al. 
The disagreement problem in explainable machine learning: a practitioner\u2019s perspective. arXiv.org https:\/\/arxiv.org\/abs\/2202.01602v4 (2022)."},{"key":"1764_CR32","doi-asserted-by":"publisher","first-page":"93","DOI":"10.1080\/19312458.2011.568376","volume":"5","author":"K Krippendorff","year":"2011","unstructured":"Krippendorff, K. Agreement and information in the reliability of coding. Commun. Methods Meas. 5, 93\u2013112 (2011).","journal-title":"Commun. Methods Meas."},{"key":"1764_CR33","doi-asserted-by":"crossref","unstructured":"Seni, G. & Elder, J. Ensemble methods in data mining: improving accuracy through combining predictions (Morgan & Claypool Publishers, 2010).","DOI":"10.1007\/978-3-031-01899-2"},{"key":"1764_CR34","doi-asserted-by":"crossref","unstructured":"Miller, T. Explanation in artificial intelligence: Insights from the social sciences. Artif. Intell. 267, 1\u201338 (2019).","DOI":"10.1016\/j.artint.2018.07.007"},{"key":"1764_CR35","doi-asserted-by":"publisher","first-page":"56","DOI":"10.1038\/s42256-019-0138-9","volume":"2","author":"SM Lundberg","year":"2020","unstructured":"Lundberg, S. M. et al. From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2, 56\u201367 (2020).","journal-title":"Nat. Mach. Intell."},{"key":"1764_CR36","doi-asserted-by":"publisher","unstructured":"Ribeiro, M. T., Singh, S. & Guestrin, C. \u2018Why should i trust you?\u2019: explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, 1135\u20131144 (ACM, 2016). https:\/\/doi.org\/10.1145\/2939672.2939778.","DOI":"10.1145\/2939672.2939778"},{"key":"1764_CR37","doi-asserted-by":"publisher","unstructured":"Asli, A. & Arad, S. Looking at explainable AI methods through the lens of causality. 
ERA https:\/\/era.library.ualberta.ca\/items\/eb60d7f6-3b20-4808-8d44-d3c16936bbcf, https:\/\/doi.org\/10.7939\/r3-tpxk-ka81 (2023).","DOI":"10.7939\/r3-tpxk-ka81"},{"key":"1764_CR38","first-page":"9780","volume":"33","author":"D Pedreschi","year":"2019","unstructured":"Pedreschi, D. et al. Meaningful explanations of black box AI decision systems. Proc. AAAI Conf. Artif. Intell. 33, 9780\u20139784 (2019).","journal-title":"Proc. AAAI Conf. Artif. Intell."},{"key":"1764_CR39","doi-asserted-by":"publisher","first-page":"e745","DOI":"10.1016\/S2589-7500(21)00208-9","volume":"3","author":"M Ghassemi","year":"2021","unstructured":"Ghassemi, M., Oakden-Rayner, L. & Beam, A. L. The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit. Health 3, e745\u2013e750 (2021).","journal-title":"Lancet Digit. Health"},{"key":"1764_CR40","unstructured":"Bird, S. et al. Fairlearn: A toolkit for assessing and improving fairness in AI systems. Fairlearn Developers. https:\/\/fairlearn.org (2024)."},{"key":"1764_CR41","unstructured":"Google. TensorFlow Fairness Indicators. 
TensorFlow https:\/\/www.tensorflow.org\/responsible_ai\/fairness_indicators\/overview (2020)."}],"container-title":["npj Digital Medicine"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.nature.com\/articles\/s41746-025-01764-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s41746-025-01764-2","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s41746-025-01764-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,15]],"date-time":"2025-06-15T04:01:22Z","timestamp":1749960082000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.nature.com\/articles\/s41746-025-01764-2"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,14]]},"references-count":41,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["1764"],"URL":"https:\/\/doi.org\/10.1038\/s41746-025-01764-2","relation":{},"ISSN":["2398-6352"],"issn-type":[{"value":"2398-6352","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,6,14]]},"assertion":[{"value":"18 April 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"1 June 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"14 June 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"The authors declare no competing interests.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"364"}}