{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,30]],"date-time":"2026-04-30T18:04:41Z","timestamp":1777572281573,"version":"3.51.4"},"reference-count":94,"publisher":"Association for Computing Machinery (ACM)","issue":"6","license":[{"start":{"date-parts":[[2025,2,10]],"date-time":"2025-02-10T00:00:00Z","timestamp":1739145600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"FCT Funda\u00e7\u00e3o para a Ci\u00eancia e a Tecnologia","award":["No. UIDB\/50021\/2020 (DOI:10.54499\/UIDB\/50021\/2020) and No. 2022.09212.PTDC (DOI: 10.54499\/2022.09212.PTDC)"],"award-info":[{"award-number":["No. UIDB\/50021\/2020 (DOI:10.54499\/UIDB\/50021\/2020) and No. 2022.09212.PTDC (DOI: 10.54499\/2022.09212.PTDC)"]}]},{"name":"UNESCO Chair on AI&VR of the University of Lisbon"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Comput. Surv."],"published-print":{"date-parts":[[2025,6,30]]},"abstract":"<jats:p>This study investigates the impact of machine learning models on the generation of counterfactual explanations by conducting a benchmark evaluation over three different types of models: a decision tree (fully transparent, interpretable, white-box model), a random forest (semi-interpretable, grey-box model), and a neural network (fully opaque, black-box model). We tested the counterfactual generation process using four algorithms (DiCE, WatcherCF, prototype, and GrowingSpheresCF) in the literature in 25 different datasets. 
Our findings indicate that: (1) Different machine learning models have little impact on the generation of counterfactual explanations; (2) Counterfactual algorithms based uniquely on proximity loss functions are not actionable and will not provide meaningful explanations; (3) One cannot have meaningful evaluation results without guaranteeing plausibility in the counterfactual generation. Algorithms that do not consider plausibility in their internal mechanisms will lead to biased and unreliable conclusions if evaluated with the current state-of-the-art metrics; (4) A counterfactual inspection analysis is strongly recommended to ensure a robust examination of counterfactual explanations and the potential identification of biases.<\/jats:p>","DOI":"10.1145\/3672553","type":"journal-article","created":{"date-parts":[[2024,6,12]],"date-time":"2024-06-12T11:11:49Z","timestamp":1718190709000},"page":"1-37","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":10,"title":["Benchmarking Instance-Centric Counterfactual Algorithms for XAI: From White Box to Black Box"],"prefix":"10.1145","volume":"57","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8826-5163","authenticated-orcid":false,"given":"Catarina","family":"Moreira","sequence":"first","affiliation":[{"name":"Human Technology Institute, University of Technology Sydney, Broadway, Australia and GI, Instituto de Engenharia de Sistemas e Computadores Investigacao e Desenvolvimento em Lisboa, Lisboa, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9881-953X","authenticated-orcid":false,"given":"Yu-Liang","family":"Chou","sequence":"additional","affiliation":[{"name":"School of Information Systems, Queensland University of Technology, Brisbane, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3352-6899","authenticated-orcid":false,"given":"Chihcheng","family":"Hsieh","sequence":"additional","affiliation":[{"name":"School of Information Systems, Queensland 
University of Technology, Brisbane, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7098-5480","authenticated-orcid":false,"given":"Chun","family":"Ouyang","sequence":"additional","affiliation":[{"name":"School of Information Systems, Queensland University of Technology, Brisbane, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8120-7649","authenticated-orcid":false,"given":"Jo\u00e3o","family":"Pereira","sequence":"additional","affiliation":[{"name":"GI, Instituto de Engenharia de Sistemas e Computadores Investigacao e Desenvolvimento em Lisboa, Lisboa, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5441-4637","authenticated-orcid":false,"given":"Joaquim","family":"Jorge","sequence":"additional","affiliation":[{"name":"GI, Instituto de Engenharia de Sistemas e Computadores Investigacao e Desenvolvimento em Lisboa, Lisboa, Portugal"}]}],"member":"320","published-online":{"date-parts":[[2025,2,10]]},"reference":[{"key":"e_1_3_2_2_2","unstructured":"Thibault Laugel Marie-Jeanne Lesot Christophe Marsala Xavier Renard and Marcin Detyniecki. 2018. GrowingSpheres. Retrieved from https:\/\/github.com\/thibaultlaugel\/growingspheres."},{"key":"e_1_3_2_3_2","unstructured":"Janis Klaise Arnaud Van Looveren Giovanni Vacanti and Alexandru Coca. 2021. Alibi Explain: Algorithms for explaining machine learning models. Journal of Machine Learning Research 22 181 (2021) 1\u20137. Retrieved from http:\/\/jmlr.org\/papers\/v22\/21-0017.html"},{"key":"e_1_3_2_4_2","unstructured":"Ramaravind K. Mothilal Amit Sharma and Chenhao Tan. 2020. DICE. Retrieved from https:\/\/github.com\/interpretml\/DiCE."},{"key":"e_1_3_2_5_2","unstructured":"Kiana Alikhademi Brianna Richardson Emma Drobina and Juan E. Gilbert. 2021. Can Explainable AI Explain Unfairness? A Framework for Evaluating Explainable AI. 
Retrieved from https:\/\/arxiv.org\/abs\/2106.07483."},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1155\/2023\/4459198"},{"key":"e_1_3_2_7_2","unstructured":"Andr\u00e9 Artelt and Barbara Hammer. 2019. On the computation of counterfactual explanations\u2014A survey."},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372830"},{"key":"e_1_3_2_9_2","doi-asserted-by":"publisher","DOI":"10.3389\/fdata.2021.688969"},{"key":"e_1_3_2_10_2","first-page":"8","volume-title":"Proceedings of the 17th International Joint Conference on Artificial Intelligence\u2014Workshop on Explainable AI","author":"Biran Or","year":"2017","unstructured":"Or Biran and Courtenay Cotton. 2017. Explanation and justification in machine learning: A survey. In Proceedings of the 17th International Joint Conference on Artificial Intelligence\u2014Workshop on Explainable AI. 8\u201313."},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","unstructured":"Francesco Bodria Fosca Giannotti Riccardo Guidotti Francesca Naretto Dino Pedreschi and Salvatore Rinzivillo. 2023. Benchmarking and survey of explanation methods for black box models. Data Mining and Knowledge Discovery 37 5 (2023) 1719\u20131778. 10.1007\/s10618-023-00933-9","DOI":"10.1007\/s10618-023-00933-9"},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.5555\/1211799"},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.1145\/3531146.3533153"},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1613\/jair.1.12228"},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2019\/876"},{"key":"e_1_3_2_16_2","unstructured":"Jiawei Chen Hande Dong Xiang Wang Fuli Feng Meng Wang and Xiangnan He. 2021. Bias and Debias in Recommender System: A Survey and Future Directions. 
Retrieved from https:\/\/arxiv.org\/abs\/2010.03240."},{"key":"e_1_3_2_17_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2021.11.003"},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58112-1_31"},{"key":"e_1_3_2_19_2","unstructured":"Arun Das and Paul Rad. 2020. Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey. Retrieved from https:\/\/arxiv.org\/abs\/2006.11371."},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.3390\/app11167274"},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ins.2023.119898"},{"key":"e_1_3_2_22_2","volume-title":"Proceedings of the 32nd International Conference on Neural Information Processing Systems","author":"Dhurandhar Amit","year":"2018","unstructured":"Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Paishun Ting, Karthikeyan Shanmugam, and Payel Das. 2018. Explanations based on the missing: Towards contrastive explanations with pertinent negatives. In Proceedings of the 32nd International Conference on Neural Information Processing Systems."},{"key":"e_1_3_2_23_2","volume-title":"The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World","author":"Domingos Pedro","year":"2017","unstructured":"Pedro Domingos. 2017. The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Penguin."},{"key":"e_1_3_2_24_2","first-page":"1","volume-title":"Proceedings of the International Conference on Machine Learning (ICML\u201920)","volume":"2020","author":"Downs Michael","year":"2020","unstructured":"Michael Downs, Jonathan L. Chu, Yaniv Yacoby, Finale Doshi-Velez, and Weiwei Pan. 2020. CRUDS: Counterfactual recourse using disentangled subspaces. In Proceedings of the International Conference on Machine Learning (ICML\u201920), Vol. 2020. 
1\u201323."},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-33607-3_10"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1145\/3336191.3371824"},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.1609\/aimag.v38i3.2741"},{"key":"e_1_3_2_28_2","volume-title":"Proceedings of the 32nd Annual Conference on Neural Information Processing Systems (NeurIPS\u201918)","author":"Grath Rory Mc","year":"2018","unstructured":"Rory Mc Grath, Luca Costabello, Chan Le Van, Paul Sweeney, Farbod Kamiab, Zhao Shen, and Freddy Lecue. 2018. Interpretable credit application predictions with counterfactual explanations. In Proceedings of the 32nd Annual Conference on Neural Information Processing Systems (NeurIPS\u201918)."},{"key":"e_1_3_2_29_2","first-page":"507","article-title":"Why do tree-based models still outperform deep learning on typical tabular data?","volume":"35","author":"Grinsztajn L\u00e9o","year":"2022","unstructured":"L\u00e9o Grinsztajn, Edouard Oyallon, and Ga\u00ebl Varoquaux. 2022. Why do tree-based models still outperform deep learning on typical tabular data? Advances in Neural Information Processing Systems 35 (2022), 507\u2013520.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_30_2","doi-asserted-by":"publisher","unstructured":"Riccardo Guidotti. 2022. Counterfactual explanations and how to find them: literature review and benchmarking. Data Mining and Knowledge Discovery (2022) 1\u201355. 
10.1007\/s10618-022-00831-6","DOI":"10.1007\/s10618-022-00831-6"},{"key":"e_1_3_2_31_2","doi-asserted-by":"publisher","DOI":"10.1109\/MIS.2019.2957223"},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-46150-8_12"},{"key":"e_1_3_2_33_2","doi-asserted-by":"publisher","DOI":"10.1145\/3236009"},{"key":"e_1_3_2_34_2","doi-asserted-by":"publisher","DOI":"10.1609\/aimag.v40i2.2850"},{"key":"e_1_3_2_35_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-93736-2_33"},{"key":"e_1_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2021.10.007"},{"key":"e_1_3_2_37_2","doi-asserted-by":"publisher","DOI":"10.1002\/widm.1312"},{"key":"e_1_3_2_38_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2021.01.008"},{"key":"e_1_3_2_39_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICPM53251.2021.9576881"},{"key":"e_1_3_2_40_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41598-023-41463-0"},{"key":"e_1_3_2_41_2","first-page":"265","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS\u201920)","author":"Karimi Amir-Hossein","year":"2020","unstructured":"Amir-Hossein Karimi, Bodo Julius von K\u00fcgelgen, Bernhard Sch\u00f6lkopf, and Isabel Valera. 2020. Algorithmic recourse under imperfect causal knowledge: A probabilistic approach. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS\u201920). 265\u2013277."},{"key":"e_1_3_2_42_2","first-page":"895","volume-title":"Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS\u201920)","author":"Karimi Amir-Hossein","year":"2020","unstructured":"Amir-Hossein Karimi, Gilles Barthe, Borja Balle, and Isabel Valera. 2020. Model-agnostic counterfactual explanations for consequential decisions. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS\u201920). 
895\u2013905."},{"key":"e_1_3_2_43_2","unstructured":"Amir-Hossein Karimi Gilles Barthe Bernhard Sch\u00f6lkopf and Isabel Valera. 2021. A survey of algorithmic recourse: definitions formulations solutions and prospects. Retrieved from https:\/\/arxiv.org\/abs\/2010.04050."},{"key":"e_1_3_2_44_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2021\/609"},{"key":"e_1_3_2_45_2","first-page":"163","volume-title":"Case-Based Reasoning Research and Development","author":"Keane Mark T.","year":"2020","unstructured":"Mark T. Keane and Barry Smyth. 2020. Good counterfactuals and where to find them: A case-based technique for generating counterfactuals for explainable AI (XAI). In Case-Based Reasoning Research and Development. Springer, Berlin, 163\u2013178."},{"key":"e_1_3_2_46_2","volume-title":"Proceedings of the 29th Advances in Neural Information Processing Systems","author":"Kim Been","year":"2016","unstructured":"Been Kim, Rajiv Khanna, and Oluwasanmi O. Koyejo. 2016. Examples are not enough, learn to criticize! criticism for interpretability. In Proceedings of the 29th Advances in Neural Information Processing Systems."},{"key":"e_1_3_2_47_2","volume-title":"Proceedings of the International Conference on Machine Learning: Workshop on Algorithmic Recourse (ICML\u201921)","author":"Kirfel Lara","year":"2021","unstructured":"Lara Kirfel and Alice Liefgreen. 2021. What if (and how...)? Actionability shapes people\u2019s perceptions of counterfactual explanations in automated decision-making. In Proceedings of the International Conference on Machine Learning: Workshop on Algorithmic Recourse (ICML\u201921)."},{"key":"e_1_3_2_48_2","doi-asserted-by":"crossref","unstructured":"Gary Klein Mohammadreza Jalaeian Robert Hoffman and Shane T. Mueller. 2021. The Plausibility Gap: A model of sensemaking. 
Retrieved from https:\/\/osf.io\/preprints\/psyarxiv\/rpw6e.","DOI":"10.31234\/osf.io\/rpw6e"},{"key":"e_1_3_2_49_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-91473-2_9"},{"key":"e_1_3_2_50_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2019\/388"},{"key":"e_1_3_2_51_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-46147-8_3"},{"key":"e_1_3_2_52_2","doi-asserted-by":"publisher","DOI":"10.1609\/aimag.v40i3.2866"},{"key":"e_1_3_2_53_2","doi-asserted-by":"publisher","DOI":"10.2307\/2025310"},{"key":"e_1_3_2_54_2","volume-title":"Counterfactuals","author":"Lewis David","year":"1973","unstructured":"David Lewis. 1973. Counterfactuals. Blackwell, Oxford, UK."},{"key":"e_1_3_2_55_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2024.102301"},{"key":"e_1_3_2_56_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-86520-7_40"},{"key":"e_1_3_2_57_2","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372824"},{"key":"e_1_3_2_58_2","first-page":"4765","volume-title":"Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS\u201917)","author":"Lundberg Scott","year":"2017","unstructured":"Scott Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS\u201917). 4765\u20134774."},{"key":"e_1_3_2_59_2","doi-asserted-by":"publisher","DOI":"10.1007\/BF02834632"},{"key":"e_1_3_2_60_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.artint.2018.07.007"},{"key":"e_1_3_2_61_2","volume-title":"Interpretable Machine Learning: A Guide for Making Black Box Models Explainable","author":"Molnar Christoph","year":"2020","unstructured":"Christoph Molnar. 2020. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Lulu. 
com."},{"key":"e_1_3_2_62_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.dss.2021.113561"},{"key":"e_1_3_2_63_2","doi-asserted-by":"publisher","DOI":"10.1145\/3461702.3462597"},{"key":"e_1_3_2_64_2","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372850"},{"key":"e_1_3_2_65_2","doi-asserted-by":"publisher","DOI":"10.1073\/pnas.1900654116"},{"key":"e_1_3_2_66_2","doi-asserted-by":"crossref","unstructured":"Jos\u00e9 Neves Chihcheng Hsieh Isabel Blanco Nobre Sandra Costa Sousa Chun Ouyang Anderson Maciel Andrew Duchowski Joaquim Jorge and Catarina Moreira. 2024. Shedding light on ai in radiology: A systematic review and taxonomy of eye gaze-driven interpretability in deep learning. European Journal of Radiology (2024) 111341.","DOI":"10.1016\/j.ejrad.2024.111341"},{"key":"e_1_3_2_67_2","volume-title":"Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS\u201921) Track on Datasets and Benchmarks","author":"Pawelczyk Martin","year":"2021","unstructured":"Martin Pawelczyk, Sascha Bielawski, Johannes van den Heuvel, Tobias Richter, and Gjergji Kasneci. 2021. CARLA\u2014Counterfactual and recourse library. 
In Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS\u201921) Track on Datasets and Benchmarks."},{"key":"e_1_3_2_68_2","doi-asserted-by":"publisher","DOI":"10.1145\/3366423.3380087"},{"key":"e_1_3_2_69_2","doi-asserted-by":"publisher","DOI":"10.5555\/1642718"},{"key":"e_1_3_2_70_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-39630-5_14"},{"key":"e_1_3_2_71_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-32722-4_5"},{"key":"e_1_3_2_72_2","doi-asserted-by":"publisher","DOI":"10.3390\/a13010017"},{"key":"e_1_3_2_73_2","doi-asserted-by":"publisher","DOI":"10.1145\/3375627.3375850"},{"key":"e_1_3_2_74_2","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-020-0197-y"},{"key":"e_1_3_2_75_2","first-page":"275","volume-title":"Proceedings of IEEE Symposium on Computer-based Medical Systems (CBMS\u201919)","author":"Elshawi Radwa","year":"2019","unstructured":"Radwa Elshawi, Youssef Sherif, Mouaz Al-Mallah, and Sherif Sakr. 2019. Interpretability in healthcare: A comparative study of local machine learning interpretability techniques. In Proceedings of IEEE Symposium on Computer-based Medical Systems (CBMS\u201919). 275\u2013280."},{"key":"e_1_3_2_76_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11634-020-00418-3"},{"key":"e_1_3_2_77_2","unstructured":"Shubham Rathi. 2019. Generating Counterfactual and Contrastive Explanations using SHAP. Retrieved from https:\/\/arxiv.org\/abs\/1906.09293."},{"key":"e_1_3_2_78_2","volume-title":"Proceedings of the 34th International Conference on Neural Information Processing Systems (NeurIPS\u201920)","volume":"33","author":"Rawal Kaivalya","year":"2020","unstructured":"Kaivalya Rawal and Himabindu Lakkaraju. 2020. Beyond individualized recourse: Interpretable and interactive summaries of actionable recourses. In Proceedings of the 34th International Conference on Neural Information Processing Systems (NeurIPS\u201920), Vol. 
33."},{"key":"e_1_3_2_79_2","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939778"},{"key":"e_1_3_2_80_2","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-019-0048-x"},{"key":"e_1_3_2_81_2","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287569"},{"key":"e_1_3_2_82_2","volume-title":"Proceedings of the AAAI\/ACM Conference on AI, Ethics, and Society","author":"Sharma Shubham","year":"2020","unstructured":"Shubham Sharma, Jette Henderson, and Joydeep Ghosh. 2020. CERTIFAI: Counterfactual explanations for robustness, transparency, interpretability, and fairness of artificial intelligence models. In Proceedings of the AAAI\/ACM Conference on AI, Ethics, and Society."},{"key":"e_1_3_2_83_2","first-page":"3145","volume-title":"Proceedings of the 34th International Conference on Machine Learning (ICML\u201917)","author":"Shrikumar Avanti","year":"2017","unstructured":"Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning important features through propagating activation differences. In Proceedings of the 34th International Conference on Machine Learning (ICML\u201917). 3145\u20133153."},{"key":"e_1_3_2_84_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58666-9_15"},{"key":"e_1_3_2_85_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-65310-1_31"},{"key":"e_1_3_2_86_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2021.3051315"},{"key":"e_1_3_2_87_2","first-page":"3319","volume-title":"Proceedings of the 34th International Conference on Machine Learning (ICML\u201917)","author":"Sundararajan Mukund","year":"2017","unstructured":"Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning (ICML\u201917). 
3319\u20133328."},{"key":"e_1_3_2_88_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-79108-7_8"},{"key":"e_1_3_2_89_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-91431-8_4"},{"key":"e_1_3_2_90_2","doi-asserted-by":"publisher","unstructured":"Mythreyi Velmurugan Chun Ouyang Renuka Sindhgatta and Catarina Moreira. 2023. Through the looking glass: evaluating post hoc explanations using transparent models. International Journal of Data Science and Analytics (2023) 1\u201321. 10.1007\/s41060-023-00445-1","DOI":"10.1007\/s41060-023-00445-1"},{"key":"e_1_3_2_91_2","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372876"},{"key":"e_1_3_2_92_2","unstructured":"Sahil Verma John Dickerson and Keegan Hines. 2020. Counterfactual Explanations for Machine Learning: A Review. Retrieved from https:\/\/arxiv.org\/abs\/2010.10596."},{"key":"e_1_3_2_93_2","volume-title":"Proceedings of the International Conference on Information Systems (ICIS\u201920)","author":"Wanner Jonas","year":"2020","unstructured":"Jonas Wanner, Lukas-Valentin Herm, Kai Heinrich, Christian Janiesch, and Patrick Zschech. 2020. White, grey, black: Effects of XAI augmentation on the confidence in AI-based decision support systems. In Proceedings of the International Conference on Information Systems (ICIS\u201920)."},{"key":"e_1_3_2_94_2","first-page":"841","article-title":"Counterfactual explanations without opening the black box: Automated decisions and the GDPR","volume":"31","author":"Wachter Sandra","year":"2018","unstructured":"Sandra Wachter, Brent Mittelstadt, and Chris Russell. 2018. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard J. Law Technol. 31 (2018), 841.","journal-title":"Harvard J. Law Technol."},{"key":"e_1_3_2_95_2","volume-title":"Proceedings of the 24th European Conference on Artificial Intelligence (ECAI\u201920)","author":"White Adam","year":"2020","unstructured":"Adam White and Artur d\u2019Avila Garcez. 2020. 
Measurable counterfactual local explanations for any classifier. In Proceedings of the 24th European Conference on Artificial Intelligence (ECAI\u201920)."}],"container-title":["ACM Computing Surveys"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3672553","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3672553","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T00:06:14Z","timestamp":1750291574000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3672553"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,2,10]]},"references-count":94,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2025,6,30]]}},"alternative-id":["10.1145\/3672553"],"URL":"https:\/\/doi.org\/10.1145\/3672553","relation":{},"ISSN":["0360-0300","1557-7341"],"issn-type":[{"value":"0360-0300","type":"print"},{"value":"1557-7341","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,2,10]]},"assertion":[{"value":"2022-09-14","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-06-06","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-02-10","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}