{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,14]],"date-time":"2026-03-14T11:45:11Z","timestamp":1773488711783,"version":"3.50.1"},"publisher-location":"Cham","reference-count":25,"publisher":"Springer Nature Switzerland","isbn-type":[{"value":"9783032083166","type":"print"},{"value":"9783032083173","type":"electronic"}],"license":[{"start":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T00:00:00Z","timestamp":1760227200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T00:00:00Z","timestamp":1760227200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2026]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Counterfactual explanations are a widely used approach in Explainable AI, offering actionable insights into decision-making by illustrating how small changes to input data can lead to different outcomes. Despite their importance, evaluating the quality of counterfactual explanations remains an open problem. Traditional quantitative metrics, such as sparsity or proximity, fail to fully account for human preferences in explanations, while user studies are insightful but not scalable. Moreover, relying only on a single overall satisfaction rating does not lead to a nuanced understanding of why certain explanations are effective or not. To address this, we analyze a dataset of counterfactual explanations that were evaluated by 206 human participants, who rated not only overall satisfaction but also seven explanatory criteria: feasibility, coherence, complexity, understandability, completeness, fairness, and trust. 
Modeling overall satisfaction as a function of these criteria, we find that feasibility (the actionability of suggested changes) and trust (the belief that the changes would lead to the desired outcome) consistently stand out as the strongest predictors of user satisfaction, though completeness also emerges as a meaningful contributor. Crucially, even excluding feasibility and trust, other metrics explain 58% of the variance, highlighting the importance of additional explanatory qualities. Complexity appears independent, suggesting more detailed explanations do not necessarily reduce satisfaction. Strong metric correlations imply a latent structure in how users judge quality, and demographic background (e.g., medical or ML expertise) significantly affects ranking patterns, highlighting the need for context-specific designs. These insights directly inform the development of improved counterfactual algorithms, highlighting the need to tailor explanatory qualities (completeness, consistency, fairness, complexity) to diverse user expertise and specific domain contexts.<\/jats:p>","DOI":"10.1007\/978-3-032-08317-3_10","type":"book-chapter","created":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T03:37:17Z","timestamp":1760153837000},"page":"210-229","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Predicting Satisfaction of\u00a0Counterfactual Explanations from\u00a0Human Ratings of\u00a0Explanatory Qualities"],"prefix":"10.1007","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-5414-6089","authenticated-orcid":false,"given":"Marharyta","family":"Domnich","sequence":"first","affiliation":[]},{"given":"Rasmus 
Moorits","family":"Veski","sequence":"additional","affiliation":[]},{"given":"Julius","family":"V\u00e4lja","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6263-4098","authenticated-orcid":false,"given":"Kadi","family":"Tulver","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2497-0007","authenticated-orcid":false,"given":"Raul","family":"Vicente","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,10,12]]},"reference":[{"key":"10_CR1","doi-asserted-by":"publisher","first-page":"52138","DOI":"10.1109\/ACCESS.2018.2870052","volume":"6","author":"A Adadi","year":"2018","unstructured":"Adadi, A., Berrada, M.: Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6, 52138\u201352160 (2018)","journal-title":"IEEE Access"},{"key":"10_CR2","doi-asserted-by":"crossref","unstructured":"Bansal, G., Nushi, B., Kamar, E., Lasecki, W.S., Weld, D.S., Horvitz, E.: Beyond accuracy: the role of mental models in human-AI team performance. In: Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, vol.\u00a07, pp. 2\u201311 (2019)","DOI":"10.1609\/hcomp.v7i1.5285"},{"key":"10_CR3","unstructured":"Barbu, E., Domnich, M., Vicente, R., Sakkas, N., Morim, A.: Exploring commonalities in explanation frameworks: a multi-domain survey analysis. arXiv preprint arXiv:2405.11958 (2024)"},{"key":"10_CR4","doi-asserted-by":"publisher","unstructured":"Byrne, R.M.J.: Counterfactuals in explainable artificial intelligence (XAI): evidence from human reasoning. In: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pp. 6276\u20136282. International Joint Conferences on Artificial Intelligence Organization (2019). 
https:\/\/doi.org\/10.24963\/ijcai.2019\/876","DOI":"10.24963\/ijcai.2019\/876"},{"key":"10_CR5","doi-asserted-by":"crossref","unstructured":"Byrne, R.M.: The rational imagination (2005)","DOI":"10.7551\/mitpress\/5756.001.0001"},{"key":"10_CR6","unstructured":"De\u00a0Bona, F.B., Dominici, G., Miller, T., Langheinrich, M., Gjoreski, M.: Evaluating explanations through LLMs: beyond traditional user studies. arXiv preprint arXiv:2410.17781 (2024)"},{"key":"10_CR7","doi-asserted-by":"crossref","unstructured":"Domnich, M., et al.: Towards unifying evaluation of counterfactual explanations: leveraging large language models for human-centric assessments. arXiv preprint arXiv:2410.21131 (2024)","DOI":"10.1609\/aaai.v39i15.33791"},{"key":"10_CR8","doi-asserted-by":"publisher","first-page":"60","DOI":"10.1007\/978-3-031-63800-8_4","volume-title":"Explainable Artif. Intell.","author":"M Domnich","year":"2024","unstructured":"Domnich, M., Vicente, R.: Enhancing counterfactual explanation search with diffusion distance and directional coherence. In: Longo, L., Lapuschkin, S., Seifert, C. (eds.) Explainable Artif. Intell., pp. 60\u201384. Springer Nature Switzerland, Cham (2024). https:\/\/doi.org\/10.1007\/978-3-031-63800-8_4"},{"key":"10_CR9","doi-asserted-by":"publisher","unstructured":"Domnich, M., et al.: Countereval: towards unifying evaluation of counterfactual explanations (2024). https:\/\/doi.org\/10.57967\/hf\/3824","DOI":"10.57967\/hf\/3824"},{"key":"10_CR10","doi-asserted-by":"publisher","unstructured":"Ehsan, U., et al.: The who in XAI: how AI background shapes perceptions of AI explanations. In: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. CHI \u201924, Association for Computing Machinery, New York, NY, USA (2024). 
https:\/\/doi.org\/10.1145\/3613904.3642474","DOI":"10.1145\/3613904.3642474"},{"key":"10_CR11","unstructured":"F\u00f6rster, M., H\u00fchn, P., Klier, M., Kluge, K.: Capturing users\u2019 reality: a novel approach to generate coherent counterfactual explanations. In: 54th Hawaii International Conference on System Sciences, HICSS 2021, Kauai, Hawaii, USA, January 5, 2021, pp. 1\u201310. ScholarSpace (2021). http:\/\/hdl.handle.net\/10125\/70767"},{"issue":"11","key":"10_CR12","doi-asserted-by":"publisher","first-page":"e745","DOI":"10.1016\/S2589-7500(21)00208-9","volume":"3","author":"M Ghassemi","year":"2021","unstructured":"Ghassemi, M., Oakden-Rayner, L., Beam, A.L.: The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit. Health 3(11), e745\u2013e750 (2021)","journal-title":"Lancet Digit. Health"},{"issue":"5","key":"10_CR13","doi-asserted-by":"publisher","first-page":"3111","DOI":"10.1007\/s10994-023-06319-8","volume":"113","author":"S Goethals","year":"2024","unstructured":"Goethals, S., Martens, D., Calders, T.: PreCoF: counterfactual explanations for fairness. Mach. Learn. 113(5), 3111\u20133142 (2024)","journal-title":"Mach. Learn."},{"key":"10_CR14","doi-asserted-by":"crossref","unstructured":"Guidotti, R.: Counterfactual explanations and how to find them: literature review and benchmarking. Data Min. Knowl. Disc. (2022)","DOI":"10.1007\/s10618-022-00831-6"},{"issue":"5","key":"10_CR15","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3527848","volume":"55","author":"AH Karimi","year":"2022","unstructured":"Karimi, A.H., Barthe, G., Sch\u00f6lkopf, B., Valera, I.: A survey of algorithmic recourse: contrastive explanations and consequential recommendations. ACM Comput. Surv. 55(5), 1\u201329 (2022)","journal-title":"ACM Comput. 
Surv."},{"key":"10_CR16","doi-asserted-by":"crossref","unstructured":"Keane, M.T., Kenny, E.M., Delaney, E., Smyth, B.: If only we had better counterfactual explanations: five key deficits to rectify in the evaluation of counterfactual XAI techniques (2021). https:\/\/arxiv.org\/abs\/2103.01035","DOI":"10.24963\/ijcai.2021\/609"},{"key":"10_CR17","doi-asserted-by":"crossref","unstructured":"Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1\u201338 (2019)","DOI":"10.1016\/j.artint.2018.07.007"},{"key":"10_CR18","doi-asserted-by":"publisher","unstructured":"Mothilal, R.K., Sharma, A., Tan, C.: Explaining machine learning classifiers through diverse counterfactual explanations. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 607\u2013617. FAT* \u201920, Association for Computing Machinery (2020). https:\/\/doi.org\/10.1145\/3351095.3372850, event-place: Barcelona, Spain","DOI":"10.1145\/3351095.3372850"},{"key":"10_CR19","doi-asserted-by":"crossref","unstructured":"Rasouli, P., Chieh Yu, I.: CARE: coherent actionable recourse based on sound counterfactual explanations. Int. J. Data Sci. Anal. 17(1), 13\u201338 (2024)","DOI":"10.1007\/s41060-022-00365-6"},{"key":"10_CR20","doi-asserted-by":"crossref","unstructured":"Van\u00a0Looveren, A., Klaise, J.: Interpretable counterfactual explanations guided by prototypes. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 650\u2013665. Springer (2021)","DOI":"10.1007\/978-3-030-86520-7_40"},{"key":"10_CR21","doi-asserted-by":"crossref","unstructured":"VanNostrand, P.M., Hofmann, D.M., Ma, L., Rundensteiner, E.A.: Actionable recourse for automated decisions: examining the effects of counterfactual explanation type and presentation on lay user understanding. In: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 
1682\u20131700 (2024)","DOI":"10.1145\/3630106.3658997"},{"key":"10_CR22","first-page":"841","volume":"31","author":"S Wachter","year":"2017","unstructured":"Wachter, S., Mittelstadt, B., Russell, C.: Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv. JL Tech. 31, 841 (2017)","journal-title":"Harv. JL Tech."},{"key":"10_CR23","doi-asserted-by":"crossref","unstructured":"Wang, X., Yin, M.: Are explanations helpful? A comparative study of the effects of explanations in AI-assisted decision-making. In: Proceedings of the 26th International Conference on Intelligent User Interfaces, pp. 318\u2013328 (2021)","DOI":"10.1145\/3397481.3450650"},{"key":"10_CR24","doi-asserted-by":"publisher","unstructured":"Warren, G., Byrne, R.M.J., Keane, M.T.: Categorical and continuous features in counterfactual explanations of ai systems. In: Proceedings of the 28th International Conference on Intelligent User Interfaces, pp. 171\u2013187. IUI \u201923, Association for Computing Machinery, New York, NY, USA (2023). https:\/\/doi.org\/10.1145\/3581641.3584090","DOI":"10.1145\/3581641.3584090"},{"key":"10_CR25","doi-asserted-by":"publisher","first-page":"1488","DOI":"10.3758\/s13423-017-1258-z","volume":"24","author":"JC Zemla","year":"2017","unstructured":"Zemla, J.C., Sloman, S., Bechlivanidis, C., Lagnado, D.A.: Evaluating everyday explanations. Psychon. Bull. Rev. 24, 1488\u20131500 (2017)","journal-title":"Psychon. Bull. 
Rev."}],"container-title":["Communications in Computer and Information Science","Explainable Artificial Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/978-3-032-08317-3_10","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T04:04:08Z","timestamp":1760155448000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/978-3-032-08317-3_10"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,12]]},"ISBN":["9783032083166","9783032083173"],"references-count":25,"URL":"https:\/\/doi.org\/10.1007\/978-3-032-08317-3_10","relation":{},"ISSN":["1865-0929","1865-0937"],"issn-type":[{"value":"1865-0929","type":"print"},{"value":"1865-0937","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10,12]]},"assertion":[{"value":"12 October 2025","order":1,"name":"first_online","label":"First Online","group":{"name":"ChapterHistory","label":"Chapter History"}},{"value":"The authors have no competing interests to declare that\u00a0are relevant to the content of this article.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Disclosure of Interests"}},{"value":"The implementation of the models and analyses presented in this paper is publicly available at .","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Code Availability"}},{"value":"xAI","order":1,"name":"conference_acronym","label":"Conference Acronym","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"World Conference on Explainable Artificial Intelligence","order":2,"name":"conference_name","label":"Conference Name","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Istanbul","order":3,"name":"conference_city","label":"Conference City","group":{"name":"ConferenceInfo","label":"Conference 
Information"}},{"value":"T\u00fcrkiye","order":4,"name":"conference_country","label":"Conference Country","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"2025","order":5,"name":"conference_year","label":"Conference Year","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"9 July 2025","order":7,"name":"conference_start_date","label":"Conference Start Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"11 July 2025","order":8,"name":"conference_end_date","label":"Conference End Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"3","order":9,"name":"conference_number","label":"Conference Number","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"xai2025","order":10,"name":"conference_id","label":"Conference ID","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"https:\/\/xaiworldconference.com\/2025\/","order":11,"name":"conference_url","label":"Conference URL","group":{"name":"ConferenceInfo","label":"Conference Information"}}]}}