{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,5]],"date-time":"2026-03-05T07:56:06Z","timestamp":1772697366472,"version":"3.50.1"},"reference-count":32,"publisher":"Frontiers Media SA","license":[{"start":{"date-parts":[[2026,3,5]],"date-time":"2026-03-05T00:00:00Z","timestamp":1772668800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100010409","name":"Nationaal Regieorgaan Praktijkgericht Onderzoek SIA","doi-asserted-by":"publisher","id":[{"id":"10.13039\/501100010409","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["frontiersin.org"],"crossmark-restriction":true},"short-container-title":["Front. Artif. Intell."],"abstract":"<jats:p>As artificial intelligence (AI) systems are increasingly deployed in high-risk financial decision-making contexts, the demand for transparency and interpretability becomes critical. Explainable AI (XAI) has emerged as a key research domain addressing these needs. While most existing XAI studies emphasize objective quality measures such as correctness and completeness of explanations, they often overlook the role of end-user requirements and the broader ecosystem of stakeholders. This study presents a human-centered evaluation of different visual explanation designs in financial AI applications, assessing their effectiveness. A two-phase mixed-method evaluation was conducted, combining user studies with end-users and a stakeholder workshop, to rank visual prototypes across four explanation types: feature importance, counterfactuals, contrastive\/similar examples, and rule-based explanations. A key finding is the divergence between end-users and other stakeholders\u2014including compliance officers, XAI consultants, and developers\u2014with end-users indicating a preference for concise, contextually visual explanations (e.g., small sets of decision rules or risk plots relative to similar cases), while other stakeholders often favor more complete, technically detailed representations. This highlights a critical trade-off between interpretability and completeness. 
This suggests that visual encoding choices may affect the effectiveness of AI explanations across different stakeholder groups.<\/jats:p>","DOI":"10.3389\/frai.2026.1668029","type":"journal-article","created":{"date-parts":[[2026,3,5]],"date-time":"2026-03-05T06:56:43Z","timestamp":1772693803000},"update-policy":"https:\/\/doi.org\/10.3389\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Designing effective explainable AI: a human-centered evaluation of explanation formats in financial decision-making"],"prefix":"10.3389","volume":"9","author":[{"given":"Henry","family":"Maathuis","sequence":"first","affiliation":[{"name":"Research Group Artificial Intelligence, HU University of Applied Sciences Utrecht","place":["Utrecht, Netherlands"]},{"name":"Jheronimus Academy of Data Science, Tilburg University, Eindhoven University of Technology","place":["\u2018s-Hertogenbosch, Netherlands"]}]},{"given":"Marcel","family":"Stalenhoef","sequence":"additional","affiliation":[{"name":"Research Group Human Experience & Media Design, HU University of Applied Sciences Utrecht","place":["Utrecht, Netherlands"]}]},{"given":"Sieuwert","family":"van Otterloo","sequence":"additional","affiliation":[{"name":"Research Group Artificial Intelligence, HU University of Applied Sciences Utrecht","place":["Utrecht, Netherlands"]}]},{"given":"Raymond","family":"Zwaal","sequence":"additional","affiliation":[{"name":"Center for Financial Innovation, Amsterdam University of Applied Sciences","place":["Amsterdam, Netherlands"]}]},{"given":"Kees","family":"van Montfort","sequence":"additional","affiliation":[{"name":"Center for Financial Innovation, Amsterdam University of Applied Sciences","place":["Amsterdam, Netherlands"]}]},{"given":"Danielle","family":"Sent","sequence":"additional","affiliation":[{"name":"Research Group Artificial Intelligence, HU University of Applied Sciences Utrecht","place":["Utrecht, Netherlands"]},{"name":"Jheronimus Academy of Data Science, Tilburg University, Eindhoven University of Technology","place":["\u2018s-Hertogenbosch, Netherlands"]}]}],"member":"1965","published-online":{"date-parts":[[2026,3,5]]},"reference":[{"key":"B1","doi-asserted-by":"publisher","first-page":"101805","DOI":"10.1016\/j.inffus.2023.101805","article-title":"Explainable artificial intelligence (XAI): What we know and what is left to attain trustworthy artificial intelligence","volume":"99","author":"Ali","year":"2023","journal-title":"Inform. Fusion"},{"key":"B2","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s12911-020-01332-6","article-title":"Explainability for artificial intelligence in healthcare: a multidisciplinary perspective","volume":"20","author":"Amann","year":"2020","journal-title":"BMC Med. Inform. Decis. Mak"},{"key":"B3","doi-asserted-by":"publisher","first-page":"82","DOI":"10.1016\/j.inffus.2019.12.012","article-title":"Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible ai","volume":"58","author":"Arrieta","year":"2020","journal-title":"Inform. Fusion"},{"key":"B4","doi-asserted-by":"publisher","first-page":"688969","DOI":"10.3389\/fdata.2021.688969","article-title":"Principles and practice of explainable machine learning","volume":"4","author":"Belle","year":"2021","journal-title":"Front. 
Big Data"},{"key":"B5","doi-asserted-by":"crossref","first-page":"78","DOI":"10.1145\/3514094.3534164","article-title":"\u201cHow cognitive biases affect xai-assisted decision-making: a systematic review,\u201d","volume-title":"Proceedings of the 2022 AAAI\/ACM Conference on AI, Ethics, and Society","author":"Bertrand","year":"2022"},{"key":"B6","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3334480.3383047","article-title":"\u201cWhat do people really want when they say they want \u201cexplainable ai\u201d? we asked 60 stakeholders,\u201d","volume-title":"Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems","author":"Brennen","year":"2020"},{"key":"B7","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.1702.08608","article-title":"Towards a rigorous science of interpretable machine learning","author":"Doshi-Velez","year":"2017","journal-title":"arXiv"},{"key":"B8","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3411764.3445188","article-title":"\u201cExpanding explainability: Towards social transparency in ai systems,\u201d","volume-title":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","author":"Ehsan","year":"2021"},{"key":"B9","unstructured":"\u201cProposal for a regulation laying down harmonised rules on artificial intelligence and amending certain union legislative acts,\u201d\n          \n          Technical Report, European Commission\n          \n          2021"},{"key":"B10","unstructured":"The Artificial Intelligence Act \u201cEnsuring trustworthy AI,\u201d\n          \n          \n          2024"},{"key":"B11","first-page":"1","article-title":"\u201cRegulation (EU) 2016\/679 of the european parliament and of the council of 27 april 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing directive 95\/46\/ec (general data protection regulation),\u201d","year":"2016","journal-title":"EU Regulation 2016\/679, European Union. Official Journal of the European Union, L 119"},{"key":"B12","article-title":"\u201cThe impact of human-ai interaction on discrimination,\u201d","author":"Gaudeul","year":"2025","journal-title":"Techreport KJ-01-24-180-EN-N (online)"},{"key":"B13","doi-asserted-by":"crossref","first-page":"80","DOI":"10.1109\/DSAA.2018.00018","article-title":"\u201cExplaining explanations: An overview of interpretability of machine learning,\u201d","volume-title":"2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA)","author":"Gilpin","year":"2018"},{"key":"B14","doi-asserted-by":"publisher","first-page":"45","DOI":"10.1007\/s12559-023-10179-8","article-title":"Interpreting black-box models: a review on explainable artificial intelligence","volume":"16","author":"Hassija","year":"2024","journal-title":"Cognit. Comput"},{"key":"B15","doi-asserted-by":"publisher","first-page":"63","DOI":"10.2469\/faj.v46.n6.63","article-title":"Artificial neural systems: a new tool for financial decision-making","volume":"46","author":"Hawley","year":"1990","journal-title":"Finan. 
Analysts J"},{"key":"B16","unstructured":"\u201cThe expert group's policy and investment recommendations for trustworthy AI,\u201d\n          \n          Technical report, European Commission\n          \n          2019"},{"key":"B17","doi-asserted-by":"crossref","first-page":"237","DOI":"10.1007\/978-3-030-88900-5_27","article-title":"\u201cExplainable artificial intelligence (xai): how the visualization of ai predictions affects user cognitive load and confidence,\u201d","volume-title":"Information Systems and Neuroscience: NeuroIS Retreat 2021","author":"Hudon","year":"2021"},{"key":"B18","first-page":"1","article-title":"\u201cInterpreting interpretability: understanding data scientists' use of interpretability tools for machine learning,\u201d","volume-title":"Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems","author":"Kaur","year":"2020"},{"key":"B19","doi-asserted-by":"publisher","first-page":"1456486","DOI":"10.3389\/frai.2024.1456486","article-title":"Human-centered evaluation of explainable ai applications: a systematic review","volume":"7","author":"Kim","year":"","journal-title":"Front. Artif. Intellig"},{"key":"B20","article-title":"\u201cIdentifying XAI user needs: gaps between literature and use cases in the financial sector,\u201d","author":"Kim","year":"","journal-title":"HHAI-WS 2024: Workshops at the Third International Conference on Hybrid Human-Artificial Intelligence (HHAI)"},{"key":"B21","doi-asserted-by":"publisher","first-page":"103160","DOI":"10.1016\/j.ijhcs.2023.103160","article-title":"Do stakeholder needs differ?-designing stakeholder-tailored explainable artificial intelligence (XAI) interfaces","volume":"181","author":"Kim","year":"","journal-title":"Int. J. Hum. Comput. Stud"},{"key":"B22","doi-asserted-by":"publisher","first-page":"103473","DOI":"10.1016\/j.artint.2021.103473","article-title":"What do we want from explainable artificial intelligence (XAI)?-a stakeholder perspective on xai and a conceptual model guiding interdisciplinary xai research","volume":"296","author":"Langer","year":"2021","journal-title":"Artif. Intell"},{"key":"B23","first-page":"1","article-title":"\u201cQuestioning the ai: informing design practices for explainable ai user experiences,\u201d","volume-title":"Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems","author":"Liao","year":"2020"},{"key":"B24","doi-asserted-by":"publisher","first-page":"147","DOI":"10.1609\/hcomp.v10i1.21995","article-title":"\u201cConnecting algorithmic research and usage contexts: a perspective of contextualized evaluation for explainable AI,\u201d","author":"Liao","year":"2022","journal-title":"Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 10"},{"key":"B25","doi-asserted-by":"publisher","first-page":"31","DOI":"10.1145\/3236386.3241340","article-title":"The mythos of model interpretability: in machine learning, the concept of interpretability is both important and slippery","volume":"16","author":"Lipton","year":"2018","journal-title":"Queue"},{"key":"B26","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.artint.2018.07.007","article-title":"Explanation in artificial intelligence: insights from the social sciences","volume":"267","author":"Miller","year":"2019","journal-title":"Artif. 
Intell"},{"key":"B27","doi-asserted-by":"publisher","first-page":"3","DOI":"10.1145\/3387166","article-title":"A multidisciplinary survey and framework for design and evaluation of explainable ai systems","volume":"11","author":"Mohseni","year":"2021","journal-title":"ACM Trans. Interact. Intell. Syst"},{"key":"B28","author":"Norman","year":"2013","journal-title":"The Design of Everyday Things: Revised and Expanded Edition"},{"key":"B29","doi-asserted-by":"publisher","first-page":"1240005","DOI":"10.1142\/S0218213012400052","article-title":"Ai tools in decision making support systems: a review","volume":"21","author":"Phillips-Wren","year":"2012","journal-title":"Int. J. Artif. Intellig. Tools"},{"key":"B30","doi-asserted-by":"publisher","first-page":"257","DOI":"10.1207\/s15516709cog1202_4","article-title":"Cognitive load during problem solving: Effects on learning","volume":"12","author":"Sweller","year":"1988","journal-title":"Cogn. Sci"},{"key":"B31","author":"Tidwell","year":"2010","journal-title":"Designing Interfaces: Patterns for Effective Interaction Design"},{"key":"B32","doi-asserted-by":"publisher","first-page":"97","DOI":"10.1016\/0010-0285(80)90005-5","article-title":"A feature-integration theory of attention","volume":"12","author":"Treisman","year":"1980","journal-title":"Cogn. Psychol"}],"container-title":["Frontiers in Artificial Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/www.frontiersin.org\/articles\/10.3389\/frai.2026.1668029\/full","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,5]],"date-time":"2026-03-05T06:56:57Z","timestamp":1772693817000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.frontiersin.org\/articles\/10.3389\/frai.2026.1668029\/full"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,5]]},"references-count":32,"alternative-id":["10.3389\/frai.2026.1668029"],"URL":"https:\/\/doi.org\/10.3389\/frai.2026.1668029","relation":{},"ISSN":["2624-8212"],"issn-type":[{"value":"2624-8212","type":"electronic"}],"subject":[],"published":{"date-parts":[[2026,3,5]]},"article-number":"1668029"}}