{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,22]],"date-time":"2026-04-22T02:02:21Z","timestamp":1776823341787,"version":"3.51.2"},"reference-count":108,"publisher":"Association for Computing Machinery (ACM)","issue":"CSCW2","license":[{"start":{"date-parts":[[2023,9,28]],"date-time":"2023-09-28T00:00:00Z","timestamp":1695859200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100006374","name":"National Science Foundation","doi-asserted-by":"publisher","award":["IIS-2126602"],"award-info":[{"award-number":["IIS-2126602"]}],"id":[{"id":"10.13039\/501100006374","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Hum.-Comput. Interact."],"published-print":{"date-parts":[[2023,9,28]]},"abstract":"<jats:p>While a vast collection of explainable AI (XAI) algorithms has been developed in recent years, they have been criticized for significant gaps with how humans produce and consume explanations. As a result, current XAI techniques are often found to be hard to use and lack effectiveness. In this work, we attempt to close these gaps by making AI explanations selective ---a fundamental property of human explanations---by selectively presenting a subset of model reasoning based on what aligns with the recipient's preferences. We propose a general framework for generating selective explanations by leveraging human input on a small dataset. This framework opens up a rich design space that accounts for different selectivity goals, types of input, and more. As a showcase, we use a decision-support task to explore selective explanations based on what the decision-maker would consider relevant to the decision task. We conducted two experimental studies to examine three paradigms based on our proposed framework: in Study 1, we ask the participants to provide critique-based or open-ended input to generate selective explanations (self-input). In Study 2, we show the participants selective explanations based on input from a panel of similar users (annotator input). Our experiments demonstrate the promise of selective explanations in reducing over-reliance on AI and improving collaborative decision making and subjective perceptions of the AI system, but also paint a nuanced picture that attributes some of these positive effects to the opportunity to provide one's own input to augment AI explanations. 
Overall, our work proposes a novel XAI framework inspired by human communication behaviors and demonstrates its potential to encourage future work to make AI explanations more human-compatible.<\/jats:p>","DOI":"10.1145\/3610206","type":"journal-article","created":{"date-parts":[[2023,10,4]],"date-time":"2023-10-04T15:54:10Z","timestamp":1696434850000},"page":"1-35","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":43,"title":["Selective Explanations: Leveraging Human Input to Align Explainable AI"],"prefix":"10.1145","volume":"7","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-3165-6136","authenticated-orcid":false,"given":"Vivian","family":"Lai","sequence":"first","affiliation":[{"name":"University of Colorado Boulder, Boulder, CO, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0008-2448-5846","authenticated-orcid":false,"given":"Yiming","family":"Zhang","sequence":"additional","affiliation":[{"name":"University of Chicago, Chicago, IL, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0000-6101-2150","authenticated-orcid":false,"given":"Chacha","family":"Chen","sequence":"additional","affiliation":[{"name":"University of Chicago, Chicago, IL, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4543-7196","authenticated-orcid":false,"given":"Q. Vera","family":"Liao","sequence":"additional","affiliation":[{"name":"Microsoft Research, Montreal, Canada"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3981-2116","authenticated-orcid":false,"given":"Chenhao","family":"Tan","sequence":"additional","affiliation":[{"name":"University of Chicago, Chicago, IL, USA"}]}],"member":"320","published-online":{"date-parts":[[2023,10,4]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1--14","author":"Abdul Ashraf","unstructured":"Ashraf Abdul, Christian von der Weth, Mohan Kankanhalli, and Brian Y Lim. 2020. COGAM: measuring and moderating cognitive load in machine learning model explanations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1--14."},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2870052"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1609\/hcomp.v9i1.18938"},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445736"},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2019.12.012"},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445717"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372830"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/3173574.3173951"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2017\/202"},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/3491102.3501965"},{"key":"e_1_2_1_11_1","volume-title":"Proceedings of the 25th International Conference on Intelligent User Interfaces. 454--464","author":"Buçinca Zana","year":"2020","unstructured":"Zana Buçinca, Phoebe Lin, Krzysztof Z Gajos, and Elena L Glassman. 2020. Proxy tasks and subjective measures can be misleading in evaluating explainable AI systems. In Proceedings of the 25th International Conference on Intelligent User Interfaces. 
454--464."},{"key":"e_1_2_1_12_1","volume-title":"Proceedings of the ACM on Human-Computer Interaction","volume":"5","author":"Zana Bucc","year":"2021","unstructured":"Zana Bucc inca, Maja Barbara Malaya, and Krzysztof Z Gajos. 2021. To trust or to think: cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making. Proceedings of the ACM on Human-Computer Interaction, Vol. 5, CSCW1 (2021), 1--21."},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICHI.2015.26"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3301275.3302289"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300234"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.findings-acl.86"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1609\/icwsm.v14i1.7282"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.3390\/electronics8080832"},{"key":"e_1_2_1_19_1","volume-title":"Machine Explanations and Human Understanding. arXiv preprint arXiv:2202.04092","author":"Chen Chacha","year":"2022","unstructured":"Chacha Chen, Shi Feng, Amit Sharma, and Chenhao Tan. 2022a. Machine Explanations and Human Understanding. arXiv preprint arXiv:2202.04092 (2022)."},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3511299"},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300789"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1006\/ijhc.1994.1083"},{"key":"e_1_2_1_23_1","volume-title":"Proceedings of NAACL.","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of NAACL."},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1037\/xge0000033"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1287\/mnsc.2016.2643"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/3301275.3302310"},{"key":"e_1_2_1_27_1","volume-title":"Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608","author":"Doshi-Velez Finale","year":"2017","unstructured":"Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608 (2017)."},{"key":"e_1_2_1_28_1","unstructured":"Upol Ehsan Samir Passi Q Vera Liao Larry Chan I Lee Michael Muller Mark O Riedl et al. 2021a. The who in explainable ai: How ai background shapes perceptions of ai explanations. arXiv preprint arXiv:2107.13509 (2021)."},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-60117-1_33"},{"key":"e_1_2_1_30_1","volume-title":"Explainability pitfalls: Beyond dark patterns in explainable AI. arXiv preprint arXiv:2109.12480","author":"Ehsan Upol","year":"2021","unstructured":"Upol Ehsan and Mark O Riedl. 2021. Explainability pitfalls: Beyond dark patterns in explainable AI. arXiv preprint arXiv:2109.12480 (2021)."},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/3301275.3302316"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411763.3441342"},{"key":"e_1_2_1_33_1","unstructured":"Shi Feng and Jordan Boyd-Graber. [n. d.]. Learning to Explain Selectively: A Case Study on Question Answering. 
In Empirical Methods in Natural Language Processing."},{"key":"e_1_2_1_34_1","volume-title":"Carlos Scheidegger, and Dylan Slack.","author":"Friedler Sorelle A","year":"2019","unstructured":"Sorelle A Friedler, Chitradeep Dutta Roy, Carlos Scheidegger, and Dylan Slack. 2019. Assessing the local interpretability of machine learning models. arXiv preprint arXiv:1902.03501 (2019)."},{"key":"e_1_2_1_35_1","volume-title":"Impact of AI Assistance on Incidental Learning. In 27th International Conference on Intelligent User Interfaces. 794--806","author":"Gajos Krzysztof Z","year":"2022","unstructured":"Krzysztof Z Gajos and Lena Mamykina. 2022. Do People Engage Cognitively with AI? Impact of AI Assistance on Incidental Learning. In 27th International Conference on Intelligent User Interfaces. 794--806."},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376316"},{"key":"e_1_2_1_37_1","volume-title":"Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience. arXiv preprint arXiv:2001.09219","author":"Ghai Bhavya","year":"2020","unstructured":"Bhavya Ghai, Q Vera Liao, Yunfeng Zhang, Rachel Bellamy, and Klaus Mueller. 2020. Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience. arXiv preprint arXiv:2001.09219 (2020)."},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1109\/DSAA.2018.00018"},{"key":"e_1_2_1_39_1","volume-title":"Human Evaluation of Spoken vs. Visual Explanations for Open-Domain QA. arXiv preprint arXiv:2012.15075","author":"Gonzalez Ana Valeria","year":"2020","unstructured":"Ana Valeria Gonzalez, Gagan Bansal, Angela Fan, Robin Jia, Yashar Mehdad, and Srinivasan Iyer. 2020. Human Evaluation of Spoken vs. Visual Explanations for Open-Domain QA. arXiv preprint arXiv:2012.15075 (2020)."},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287563"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3359152"},{"key":"e_1_2_1_42_1","volume-title":"A survey of methods for explaining black box models. ACM computing surveys (CSUR)","author":"Guidotti Riccardo","year":"2019","unstructured":"Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2019. A survey of methods for explaining black box models. ACM computing surveys (CSUR), Vol. 51, 5 (2019), 93."},{"key":"e_1_2_1_43_1","volume-title":"Explainable artificial intelligence (xai). Defense advanced research projects agency (DARPA), nd Web","author":"Gunning David","year":"2017","unstructured":"David Gunning. 2017. Explainable artificial intelligence (xai). Defense advanced research projects agency (DARPA), nd Web, Vol. 2, 2 (2017), 1."},{"key":"e_1_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1037\/e577632012-009"},{"key":"e_1_2_1_45_1","volume-title":"The problem of causal selection. Contemporary science and natural explanation: Commonsense conceptions of causality","author":"Hesslow Germund","year":"1988","unstructured":"Germund Hesslow. 1988. The problem of causal selection. Contemporary science and natural explanation: Commonsense conceptions of causality (1988), 11--32."},{"key":"e_1_2_1_46_1","doi-asserted-by":"crossref","unstructured":"Denis Hilton. 2017. Social attribution and explanation. 
(2017).","DOI":"10.1093\/oxfordhb\/9780199399550.013.33"},{"key":"e_1_2_1_47_1","volume-title":"The psychology of counterfactual thinking","author":"Hilton Denis J","unstructured":"Denis J Hilton and L McCLURE John. 2007. The course of events: counterfactuals, causal sequences, and explanation. In The psychology of counterfactual thinking. Routledge, 56--72."},{"key":"e_1_2_1_48_1","volume-title":"Knowledge-based causal attribution: The abnormal conditions focus model. Psychological review","author":"Hilton Denis J","year":"1986","unstructured":"Denis J Hilton and Ben R Slugoski. 1986. Knowledge-based causal attribution: The abnormal conditions focus model. Psychological review, Vol. 93, 1 (1986), 75."},{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300809"},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445385"},{"key":"e_1_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1145\/3442188.3445899"},{"key":"e_1_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376219"},{"key":"e_1_2_1_53_1","volume-title":"International conference on machine learning. PMLR, 2668--2677","author":"Kim Been","year":"2018","unstructured":"Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. 2018. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). In International conference on machine learning. PMLR, 2668--2677."},{"key":"e_1_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300641"},{"key":"e_1_2_1_55_1","volume-title":"An evaluation of the human-interpretability of explanation. arXiv preprint arXiv:1902.00006","author":"Lage Isaac","year":"2019","unstructured":"Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Sam Gershman, and Finale Doshi-Velez. 2019. An evaluation of the human-interpretability of explanation. arXiv preprint arXiv:1902.00006 (2019)."},{"key":"e_1_2_1_56_1","volume-title":"Towards a science of human-ai decision making: a survey of empirical studies. arXiv preprint arXiv:2112.11471","author":"Lai Vivian","year":"2021","unstructured":"Vivian Lai, Chacha Chen, Q Vera Liao, Alison Smith-Renner, and Chenhao Tan. 2021. Towards a science of human-ai decision making: a survey of empirical studies. arXiv preprint arXiv:2112.11471 (2021)."},{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287590"},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939874"},{"key":"e_1_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445522"},{"key":"e_1_2_1_60_1","volume-title":"Questioning the AI: Informing Design Practices for Explainable AI User Experiences. arXiv preprint arXiv:2001.02478","author":"Liao Q Vera","year":"2020","unstructured":"Q Vera Liao, Daniel Gruen, and Sarah Miller. 2020. Questioning the AI: Informing Design Practices for Explainable AI User Experiences. arXiv preprint arXiv:2001.02478 (2020)."},{"key":"e_1_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.1145\/3531146.3533182"},{"key":"e_1_2_1_62_1","volume-title":"Human-Centered Explainable AI (XAI): From Algorithms to User Experiences. arXiv preprint arXiv:2110.10790","author":"Vera Liao Q","year":"2021","unstructured":"Q Vera Liao and Kush R Varshney. 2021. Human-Centered Explainable AI (XAI): From Algorithms to User Experiences. 
arXiv preprint arXiv:2110.10790 (2021)."},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1609\/hcomp.v10i1.21995"},{"key":"e_1_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1145\/1518701.1519023"},{"key":"e_1_2_1_65_1","doi-asserted-by":"publisher","DOI":"10.1017\/S1358246100005130"},{"key":"e_1_2_1_66_1","doi-asserted-by":"publisher","DOI":"10.1145\/3236386.3241340"},{"key":"e_1_2_1_67_1","doi-asserted-by":"publisher","DOI":"10.1145\/3479552"},{"key":"e_1_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372824"},{"key":"e_1_2_1_69_1","volume-title":"A unified approach to interpreting model predictions. Advances in neural information processing systems","author":"Lundberg Scott M","year":"2017","unstructured":"Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. Advances in neural information processing systems, Vol. 30 (2017)."},{"key":"e_1_2_1_70_1","volume-title":"Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics","author":"Maas Andrew L.","year":"2011","unstructured":"Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning Word Vectors for Sentiment Analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Portland, Oregon, USA, 142--150."},{"key":"e_1_2_1_71_1","volume-title":"How the mind explains behavior: Folk explanations, meaning, and social interaction","author":"Malle Bertram F","unstructured":"Bertram F Malle. 2006. How the mind explains behavior: Folk explanations, meaning, and social interaction. MIT press."},{"key":"e_1_2_1_72_1","doi-asserted-by":"publisher","DOI":"10.1111\/j.2044-8309.1997.tb01129.x"},{"key":"e_1_2_1_73_1","doi-asserted-by":"publisher","unstructured":"Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. https:\/\/doi.org\/10.48550\/arXiv.1301.3781 arxiv: 1301.3781 [cs]","DOI":"10.48550\/arXiv.1301.3781"},{"key":"e_1_2_1_74_1","volume-title":"Explanation in artificial intelligence: Insights from the social sciences. Artificial intelligence","author":"Miller Tim","year":"2019","unstructured":"Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial intelligence, Vol. 267 (2019), 1--38."},{"key":"e_1_2_1_75_1","volume-title":"How do humans understand explanations from machine learning systems? an evaluation of the human-interpretability of explanation. arXiv preprint arXiv:1802.00682","author":"Narayanan Menaka","year":"2018","unstructured":"Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, Sam Gershman, and Finale Doshi-Velez. 2018. How do humans understand explanations from machine learning systems? an evaluation of the human-interpretability of explanation. arXiv preprint arXiv:1802.00682 (2018)."},{"key":"e_1_2_1_76_1","doi-asserted-by":"publisher","DOI":"10.1145\/3397481.3450658"},{"key":"e_1_2_1_77_1","doi-asserted-by":"crossref","unstructured":"Bo Pang, Lillian Lee, et al. 2008. Opinion mining and sentiment analysis. Foundations and Trends\u00ae in information retrieval, Vol. 
2, 1--2 (2008), 1--135.","DOI":"10.1561\/1500000011"},{"key":"e_1_2_1_78_1","doi-asserted-by":"publisher","DOI":"10.1145\/3491102.3502104"},{"key":"e_1_2_1_79_1","doi-asserted-by":"publisher","DOI":"10.5555\/1953048.2078195"},{"key":"e_1_2_1_80_1","doi-asserted-by":"publisher","DOI":"10.3115\/v1\/D14-1162"},{"key":"e_1_2_1_81_1","volume-title":"Jennifer Wortman Vaughan, and Hanna Wallach","author":"Poursabzi-Sangdeh Forough","year":"2018","unstructured":"Forough Poursabzi-Sangdeh, Daniel G Goldstein, Jake M Hofman, Jennifer Wortman Vaughan, and Hanna Wallach. 2018. Manipulating and measuring model interpretability. arXiv preprint arXiv:1802.07810 (2018)."},{"key":"e_1_2_1_82_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445315"},{"key":"e_1_2_1_83_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939778"},{"key":"e_1_2_1_84_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11491"},{"key":"e_1_2_1_85_1","doi-asserted-by":"publisher","DOI":"10.1145\/3397481.3450699"},{"key":"e_1_2_1_86_1","volume-title":"Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature machine intelligence","author":"Rudin Cynthia","year":"2019","unstructured":"Cynthia Rudin. 2019. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature machine intelligence, Vol. 1, 5 (2019), 206--215."},{"key":"e_1_2_1_87_1","volume-title":"Talktomodel: Understanding machine learning models with open ended dialogues. arXiv preprint arXiv:2207.04154","author":"Slack Dylan","year":"2022","unstructured":"Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju, and Sameer Singh. 2022. Talktomodel: Understanding machine learning models with open ended dialogues. arXiv preprint arXiv:2207.04154 (2022)."},{"key":"e_1_2_1_88_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376624"},{"key":"e_1_2_1_89_1","doi-asserted-by":"publisher","DOI":"10.1145\/3301275.3302322"},{"key":"e_1_2_1_90_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijhcs.2009.03.004"},{"key":"e_1_2_1_91_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445088"},{"key":"e_1_2_1_92_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445101"},{"key":"e_1_2_1_93_1","volume-title":"Explanations Can Reduce Overreliance on AI Systems During Decision-Making. arXiv preprint arXiv:2212.06823","author":"Vasconcelos Helena","year":"2022","unstructured":"Helena Vasconcelos, Matthew J\u00f6rke, Madeleine Grunde-McLaughlin, Tobias Gerstenberg, Michael Bernstein, and Ranjay Krishna. 2022. Explanations Can Reduce Overreliance on AI Systems During Decision-Making. arXiv preprint arXiv:2212.06823 (2022)."},{"key":"e_1_2_1_94_1","volume-title":"Counterfactual explanations for machine learning: A review. arXiv preprint arXiv:2010.10596","author":"Verma Sahil","year":"2020","unstructured":"Sahil Verma, John Dickerson, and Keegan Hines. 2020. Counterfactual explanations for machine learning: A review. arXiv preprint arXiv:2010.10596 (2020)."},{"key":"e_1_2_1_95_1","doi-asserted-by":"crossref","unstructured":"Sandra Wachter, Brent Mittelstadt, and Chris Russell. 2017. 
Counterfactual explanations without opening the black box: Automated decisions and the GDPR.","DOI":"10.2139\/ssrn.3063289"},{"key":"e_1_2_1_96_1","doi-asserted-by":"publisher","DOI":"10.1145\/3359313"},{"key":"e_1_2_1_97_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300831"},{"key":"e_1_2_1_98_1","doi-asserted-by":"publisher","DOI":"10.1145\/3397481.3450650"},{"key":"e_1_2_1_99_1","volume-title":"A Human-Grounded Evaluation of SHAP for Alert Processing. arXiv preprint arXiv:1907.03324","author":"Weerts Hilde JP","year":"2019","unstructured":"Hilde JP Weerts, Werner van Ipenburg, and Mykola Pechenizkiy. 2019. A Human-Grounded Evaluation of SHAP for Alert Processing. arXiv preprint arXiv:1907.03324 (2019)."},{"key":"e_1_2_1_100_1","doi-asserted-by":"publisher","DOI":"10.1215\/00318108-2005-001"},{"key":"e_1_2_1_101_1","volume-title":"Machines We Trust: Perspectives on Dependable AI","author":"Vaughan Jennifer Wortman","unstructured":"Jennifer Wortman Vaughan and Hanna Wallach. 2021. A Human-Centered Agenda for Intelligible Machine Learning. In Machines We Trust: Perspectives on Dependable AI, Marcello Pelillo and Teresa Scantamburlo (Eds.). MIT Press."},{"key":"e_1_2_1_102_1","volume-title":"Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1--13","author":"Xie Yao","unstructured":"Yao Xie, Melody Chen, David Kao, Ge Gao, and Xiang 'Anthony' Chen. 2020. CheXplain: Enabling Physicians to Explore and Understand Data-Driven, AI-Enabled Medical Imaging Analysis. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1--13."},{"key":"e_1_2_1_103_1","doi-asserted-by":"publisher","DOI":"10.1145\/3377325.3377480"},{"key":"e_1_2_1_104_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376301"},{"key":"e_1_2_1_105_1","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.1906.08237"},{"key":"e_1_2_1_106_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300509"},{"key":"e_1_2_1_107_1","volume-title":"Towards Relatable Explainable AI with the Perceptual Process. In CHI Conference on Human Factors in Computing Systems. 1--24","author":"Zhang Wencan","year":"2022","unstructured":"Wencan Zhang and Brian Y Lim. 2022. Towards Relatable Explainable AI with the Perceptual Process. In CHI Conference on Human Factors in Computing Systems. 
1--24."},{"key":"e_1_2_1_108_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372852"}],"container-title":["Proceedings of the ACM on Human-Computer Interaction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3610206","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3610206","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,21]],"date-time":"2025-08-21T04:23:05Z","timestamp":1755750185000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3610206"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,9,28]]},"references-count":108,"journal-issue":{"issue":"CSCW2","published-print":{"date-parts":[[2023,9,28]]}},"alternative-id":["10.1145\/3610206"],"URL":"https:\/\/doi.org\/10.1145\/3610206","relation":{},"ISSN":["2573-0142"],"issn-type":[{"value":"2573-0142","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,9,28]]},"assertion":[{"value":"2023-10-04","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}