{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T18:41:34Z","timestamp":1776105694068,"version":"3.50.1"},"reference-count":259,"publisher":"Association for Computing Machinery (ACM)","issue":"6","funder":[{"name":"Science Foundation Ireland","award":["13\/RC\/2094_P2"],"award-info":[{"award-number":["13\/RC\/2094_P2"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Comput.-Hum. Interact."],"published-print":{"date-parts":[[2025,12,31]]},"abstract":"<jats:p>As AI systems become increasingly integrated into our lives, the need to support appropriate human understanding of AI continues to grow. With new AI capabilities being deployed in different contexts, human-centered explainability is crucial to ensure people can interact with novel AI systems safely and effectively. To address evolving explainability needs, the field of Explainable AI (XAI) has produced numerous frameworks. But what do these frameworks entail and how can they be used in practice? What drives their development? As AI systems continue to grow in complexity, it is important to understand and reflect upon the value of these frameworks and their potential to address upcoming human-centered needs for XAI. Towards this, we performed a scoping review following the PRISMA-ScR procedure, gathering and analyzing a corpus of 73 papers to understand how XAI frameworks can support different stages of human-centered XAI design. We present a unified model and a set of guiding questions to help identify, compare and select relevant XAI frameworks across various design stages, making it easier for designers and researchers to apply human-centered approaches in real-world XAI contexts. 
We also analyze how frameworks are developed and evaluated, highlighting gaps and opportunities to improve both methodological and existing HCXAI practices.<\/jats:p>","DOI":"10.1145\/3769678","type":"journal-article","created":{"date-parts":[[2025,9,29]],"date-time":"2025-09-29T15:52:09Z","timestamp":1759161129000},"page":"1-79","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":3,"title":["Designing, Implementing, and Evaluating AI Explanations: A Scoping Review of Explainable AI Frameworks"],"prefix":"10.1145","volume":"32","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-9534-7047","authenticated-orcid":false,"given":"Karina","family":"Corti\u00f1as-Lorenzo","sequence":"first","affiliation":[{"name":"School of Computer Science, Trinity College Dublin, Dublin, Ireland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8506-3825","authenticated-orcid":false,"given":"Wanling","family":"Cai","sequence":"additional","affiliation":[{"name":"Trinity College Dublin &amp; Lero, Dublin, Ireland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9617-7008","authenticated-orcid":false,"given":"Gavin","family":"Doherty","sequence":"additional","affiliation":[{"name":"School of Computer Science, Trinity College Dublin, Dublin, Ireland"}]}],"member":"320","published-online":{"date-parts":[[2025,12,9]]},"reference":[{"issue":"1","key":"e_1_3_1_2_2","doi-asserted-by":"crossref","first-page":"82","DOI":"10.1038\/s41746-024-01074-z","article-title":"Foundation metrics for evaluating effectiveness of healthcare conversations powered by generative AI","volume":"7","author":"Abbasian Mahyar","year":"2024","unstructured":"Mahyar Abbasian, Elahe Khatibi, Iman Azimi, David Oniani, Zahra Shakeri Hossein Abad, Alexander Thieme, Ram Sriram, Zhongqi Yang, Yanshan Wang, Bryant Lin, et al. 2024. Foundation metrics for evaluating effectiveness of healthcare conversations powered by generative AI. 
NPJ Digital Medicine 7, 1 (2024), 82.","journal-title":"NPJ Digital Medicine"},{"key":"e_1_3_1_3_2","first-page":"1","volume-title":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","author":"Abdul Ashraf","year":"2018","unstructured":"Ashraf Abdul, Jo Vermeulen, Danding Wang, Brian Y. Lim, and Mohan Kankanhalli. 2018. Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1\u201318."},{"issue":"2","key":"e_1_3_1_4_2","doi-asserted-by":"crossref","first-page":"425","DOI":"10.1108\/INTR-05-2020-0300","article-title":"Managing the tension between opposing effects of explainability of artificial intelligence: A contingency theory perspective","volume":"32","author":"Abedin Babak","year":"2022","unstructured":"Babak Abedin. 2022. Managing the tension between opposing effects of explainability of artificial intelligence: A contingency theory perspective. Internet Research 32, 2 (2022), 425\u2013453.","journal-title":"Internet Research"},{"key":"e_1_3_1_5_2","unstructured":"Bhashithe Abeysinghe and Ruhan Circi. 2024. The challenges of evaluating LLM applications: An analysis of automated human and LLM-based approaches. arXiv:2406.03339. Retrieved from https:\/\/arxiv.org\/abs\/2406.03339"},{"key":"e_1_3_1_6_2","doi-asserted-by":"crossref","first-page":"52138","DOI":"10.1109\/ACCESS.2018.2870052","article-title":"Peeking inside the black-box: A survey on explainable artificial intelligence (XAI)","volume":"6","author":"Adadi Amina","year":"2018","unstructured":"Amina Adadi and Mohammed Berrada. 2018. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). 
IEEE Access 6 (2018), 52138\u201352160.","journal-title":"IEEE Access"},{"key":"e_1_3_1_7_2","doi-asserted-by":"crossref","first-page":"562","DOI":"10.1145\/3529190.3535693","volume-title":"Proceedings of the 15th International Conference on Pervasive Technologies Related to Assistive Environments","author":"Adhikari Ajaya","year":"2022","unstructured":"Ajaya Adhikari, Edwin Wenink, Jasper van der Waa, Cornelis Bouter, Ioannis Tolios, and Stephan Raaijmakers. 2022. Towards FAIR explainable AI: A standardized ontology for mapping XAI solutions to use cases, explanations, and AI systems. In Proceedings of the 15th International Conference on Pervasive Technologies Related to Assistive Environments, 562\u2013568."},{"issue":"4","key":"e_1_3_1_8_2","doi-asserted-by":"crossref","first-page":"105","DOI":"10.1609\/aimag.v35i4.2513","article-title":"Power to the people: The role of humans in interactive machine learning","volume":"35","author":"Amershi Saleema","year":"2014","unstructured":"Saleema Amershi, Maya Cakmak, William Bradley Knox, and Todd Kulesza. 2014. Power to the people: The role of humans in interactive machine learning. AI Magazine 35, 4 (2014), 105\u2013120.","journal-title":"AI Magazine"},{"key":"e_1_3_1_9_2","doi-asserted-by":"crossref","first-page":"40","DOI":"10.1016\/j.ijar.2023.01.004","article-title":"Explaining black-box classifiers: Properties and functions","volume":"155","author":"Amgoud Leila","year":"2023","unstructured":"Leila Amgoud. 2023. Explaining black-box classifiers: Properties and functions. 
International Journal of Approximate Reasoning 155 (2023), 40\u201365.","journal-title":"International Journal of Approximate Reasoning"},{"issue":"1","key":"e_1_3_1_10_2","doi-asserted-by":"crossref","first-page":"123","DOI":"10.1109\/MCG.2021.3130314","article-title":"Visual analytics for human-centered machine learning","volume":"42","author":"Andrienko Natalia","year":"2022","unstructured":"Natalia Andrienko, Gennady Andrienko, Linara Adilova, Stefan Wrobel, and Theresa-Marie Rhyne. 2022. Visual analytics for human-centered machine learning. IEEE Computer Graphics and Applications 42, 1 (2022), 123\u2013133.","journal-title":"IEEE Computer Graphics and Applications"},{"key":"e_1_3_1_11_2","first-page":"189","volume-title":"Proceedings of Pattern Recognition. ICPR International Workshops and Challenges","author":"Apicella Andrea","year":"2021","unstructured":"Andrea Apicella, Salvatore Giugliano, Francesco Isgr\u00f2, and Roberto Prevete. 2021. A general approach to compute the relevance of middle-level input features. In Proceedings of Pattern Recognition. ICPR International Workshops and Challenges, Part III. Springer, 189\u2013203."},{"issue":"1","key":"e_1_3_1_12_2","doi-asserted-by":"crossref","first-page":"19","DOI":"10.1080\/1364557032000119616","article-title":"Scoping studies: Towards a methodological framework","volume":"8","author":"Arksey Hilary","year":"2005","unstructured":"Hilary Arksey and Lisa O\u2019Malley. 2005. Scoping studies: Towards a methodological framework. International Journal of Social Research Methodology 8, 1 (2005), 19\u201332.","journal-title":"International Journal of Social Research Methodology"},{"key":"e_1_3_1_13_2","first-page":"101","volume-title":"Proceedings of International Work-Conference on Artificial Neural Networks","author":"Artelt Andr\u00e9","year":"2021","unstructured":"Andr\u00e9 Artelt, Fabian Hinder, Valerie Vaquet, Robert Feldhans, and Barbara Hammer. 2021. 
Contrastive explanations for explaining model adaptations. In Proceedings of International Work-Conference on Artificial Neural Networks. Springer, 101\u2013112."},{"key":"e_1_3_1_14_2","doi-asserted-by":"crossref","first-page":"108080","DOI":"10.1109\/ACCESS.2023.3315605","article-title":"A conceptual model framework for XAI requirement elicitation of application domain system","author":"Aslam Maria","year":"2023","unstructured":"Maria Aslam, Diana Segura-Velandia, and Yee Mey Goh. 2023. A conceptual model framework for XAI requirement elicitation of application domain system. IEEE Access 11 (2023), 108080\u2013108091.","journal-title":"IEEE Access"},{"key":"e_1_3_1_15_2","doi-asserted-by":"crossref","first-page":"55","DOI":"10.1007\/978-3-540-72037-9_4","volume-title":"Proceedings of International Conference on Pervasive Computing","author":"Assad Mark","year":"2007","unstructured":"Mark Assad, David J. Carmichael, Judy Kay, and Bob Kummerfeld. 2007. PersonisAD: Distributed, active, scrutable model framework for context-aware services. In Proceedings of International Conference on Pervasive Computing. Springer, 55\u201372."},{"issue":"4","key":"e_1_3_1_16_2","doi-asserted-by":"crossref","first-page":"496","DOI":"10.2307\/258555","article-title":"Organizational theories: Some criteria for evaluation","volume":"14","author":"Bacharach Samuel B.","year":"1989","unstructured":"Samuel B. Bacharach. 1989. Organizational theories: Some criteria for evaluation. Academy of Management Review 14, 4 (1989), 496\u2013515.","journal-title":"Academy of Management Review"},{"key":"e_1_3_1_17_2","volume-title":"Proceedings of 2021 CHI Conference on Human Factors in Computing Systems","volume":"1","author":"Bansal Gagan","year":"2021","unstructured":"Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, and Daniel Weld. 2021. Does the whole exceed its parts? The effect of AI explanations on complementary team performance. 
In Proceedings of 2021 CHI Conference on Human Factors in Computing Systems, 1\u201316."},{"key":"e_1_3_1_18_2","doi-asserted-by":"crossref","unstructured":"Alejandro Barredo Arrieta F. Herrera R. Chatilaf R. Benjaminsh D. Molina S. Gil-Lopez Salvador Garcia A. Barbadoh S. Tabikg A. Bennetot et al. 2019. Explainable artificial intelligence (XAI): Concepts taxonomies opportunities and challenges toward responsible AI. Information Fusion 58 (2019) 82\u2013115.","DOI":"10.1016\/j.inffus.2019.12.012"},{"key":"e_1_3_1_19_2","volume-title":"The Craft of Information Visualization: Readings and Reflections","author":"Bederson Benjamin B.","year":"2003","unstructured":"Benjamin B. Bederson and Ben Shneiderman. 2003. The Craft of Information Visualization: Readings and Reflections. Morgan Kaufmann."},{"key":"e_1_3_1_20_2","doi-asserted-by":"crossref","first-page":"709","DOI":"10.1145\/1518701.1518812","volume-title":"Proceedings of SIGCHI Conference on Human Factors in Computing Systems","author":"Benford Steve","year":"2009","unstructured":"Steve Benford, Gabriella Giannachi, Boriana Koleva, and Tom Rodden. 2009. From interaction to trajectories: Designing coherent journeys through user experiences. In Proceedings of SIGCHI Conference on Human Factors in Computing Systems, 709\u2013718."},{"key":"e_1_3_1_21_2","first-page":"86","volume-title":"Proceedings of 2023 IEEE 20th International Conference on Software Architecture Companion (ICSA-C)","author":"Bersani Marcello M.","year":"2023","unstructured":"Marcello M. Bersani, Matteo Camilli, Livia Lestingi, Raffaela Mirandola, Matteo Rossi, and Patrizia Scandurra. 2023. Towards better trust in human-machine teaming through explainable dependability. In Proceedings of 2023 IEEE 20th International Conference on Software Architecture Companion (ICSA-C). 
IEEE, 86\u201390."},{"key":"e_1_3_1_22_2","doi-asserted-by":"crossref","first-page":"291","DOI":"10.1016\/B978-155860808-5\/50011-3","volume-title":"HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science","author":"Bertelsen Olav W.","year":"2003","unstructured":"Olav W. Bertelsen and Susanne B\u00f8dker. 2003. Activity theory. In HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science, J. Carroll (Ed.), Morgan Kaufmann, 291\u2013324."},{"key":"e_1_3_1_23_2","doi-asserted-by":"crossref","first-page":"78","DOI":"10.1145\/3514094.3534164","volume-title":"Proceedings of the 2022 AAAI\/ACM Conference on AI, Ethics, and Society","author":"Bertrand Astrid","year":"2022","unstructured":"Astrid Bertrand, Rafik Belloum, James R. Eagan, and Winston Maxwell. 2022. How cognitive biases affect XAI-assisted decision-making: A systematic review. In Proceedings of the 2022 AAAI\/ACM Conference on AI, Ethics, and Society, 78\u201391."},{"key":"e_1_3_1_24_2","doi-asserted-by":"crossref","first-page":"648","DOI":"10.1145\/3351095.3375624","volume-title":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","author":"Bhatt Umang","year":"2020","unstructured":"Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri, Jos\u00e9 M. F. Moura, and Peter Eckersley. 2020. Explainable machine learning in deployment. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 648\u2013657."},{"key":"e_1_3_1_25_2","unstructured":"Rishi Bommasani Drew A. Hudson Ehsan Adeli Russ Altman Simran Arora Sydney von Arx Michael S. Bernstein Jeannette Bohg Antoine Bosselut Emma Brunskill et al. 2021. On the opportunities and risks of foundation models. arXiv:2108.07258. 
Retrieved from https:\/\/arxiv.org\/abs\/2108.07258"},{"key":"e_1_3_1_26_2","volume-title":"Sorting Things Out: Classification and Its Consequences","author":"Bowker Geoffrey C.","year":"2000","unstructured":"Geoffrey C. Bowker and Susan Leigh Star. 2000. Sorting Things Out: Classification and Its Consequences. MIT press."},{"key":"e_1_3_1_27_2","doi-asserted-by":"crossref","first-page":"103181","DOI":"10.1016\/j.ijhcs.2023.103181","article-title":"Exploring how politeness impacts the user experience of chatbots for mental health support","volume":"184","author":"Bowman Robert","year":"2024","unstructured":"Robert Bowman, Orla Cooney, Joseph W. Newbold, Anja Thieme, Leigh Clark, Gavin Doherty, and Benjamin Cowan. 2024. Exploring how politeness impacts the user experience of chatbots for mental health support. International Journal of Human-Computer Studies 184 (2024), 103181.","journal-title":"International Journal of Human-Computer Studies"},{"key":"e_1_3_1_28_2","first-page":"1","volume-title":"Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems","author":"Bowman Robert","year":"2023","unstructured":"Robert Bowman, Camille Nadal, Kellie Morrissey, Anja Thieme, and Gavin Doherty. 2023. Using thematic analysis in healthcare HCI at CHI: A scoping review. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1\u201318."},{"issue":"6","key":"e_1_3_1_29_2","first-page":"84","article-title":"Design thinking","volume":"86","author":"Brown Tim","year":"2008","unstructured":"Tim Brown. 2008. Design thinking. Harvard Business Review 86, 6 (2008), 84.","journal-title":"Harvard Business Review"},{"issue":"3","key":"e_1_3_1_30_2","doi-asserted-by":"crossref","first-page":"381","DOI":"10.1111\/j.1540-5885.2011.00806.x","article-title":"Change by design","volume":"28","author":"Brown Tim","year":"2011","unstructured":"Tim Brown and Barry Katz. 2011. Change by design. 
Journal of Product Innovation Management 28, 3 (2011), 381\u2013383.","journal-title":"Journal of Product Innovation Management"},{"issue":"1","key":"e_1_3_1_31_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3449287","article-title":"To trust or to think: Cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making","volume":"5","author":"Bu\u00e7inca Zana","year":"2021","unstructured":"Zana Bu\u00e7inca, Maja Barbara Malaya, and Krzysztof Z. Gajos. 2021. To trust or to think: Cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making. Proceedings of the ACM on Human-Computer Interaction 5, CSCW1 (2021), 1\u201321.","journal-title":"Proceedings of the ACM on Human-Computer Interaction"},{"key":"e_1_3_1_32_2","doi-asserted-by":"crossref","first-page":"118888","DOI":"10.1016\/j.eswa.2022.118888","article-title":"Quod erat demonstrandum?\u2014Towards a typology of the concept of explanation for the design of Explainable AI","volume":"213","author":"Cabitza Federico","year":"2023","unstructured":"Federico Cabitza, Andrea Campagner, Gianclaudio Malgieri, Chiara Natali, David Schneeberger, Karl Stoeger, and Andreas Holzinger. 2023. Quod erat demonstrandum?\u2014Towards a typology of the concept of explanation for the design of Explainable AI. Expert Systems with Applications 213 (2023), 118888.","journal-title":"Expert Systems with Applications"},{"key":"e_1_3_1_33_2","volume-title":"HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science","author":"Carroll John M.","year":"2003","unstructured":"John M. Carroll. 2003. HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science. 
Elsevier."},{"key":"e_1_3_1_34_2","doi-asserted-by":"crossref","first-page":"163","DOI":"10.1145\/3511047.3537678","volume-title":"Adjunct Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization","author":"Chambers Owen","year":"2022","unstructured":"Owen Chambers, Robin Cohen, Maura R. Grossman, and Queenie Chen. 2022. Creating a user model to support user-specific explanations of AI systems. In Adjunct Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, 163\u2013166."},{"key":"e_1_3_1_35_2","first-page":"228","volume-title":"Proceedings of International Semantic Web Conference","author":"Chari Shruthi","year":"2020","unstructured":"Shruthi Chari, Oshani Seneviratne, Daniel M. Gruen, Morgan A. Foreman, Amar K. Das, and Deborah L. McGuinness. 2020. Explanation ontology: A model of explanations for user-centered AI. In Proceedings of International Semantic Web Conference, Springer, 228\u2013243."},{"key":"e_1_3_1_36_2","first-page":"307","volume-title":"Proceedings of the 26th International Conference on Intelligent User Interfaces","author":"Chromik Michael","year":"2021","unstructured":"Michael Chromik, Malin Eiband, Felicitas Buchner, Adrian Kr\u00fcger, and Andreas Butz. 2021. I think I get your point, AI! The illusion of explanatory depth in Explainable AI. In Proceedings of the 26th International Conference on Intelligent User Interfaces, 307\u2013317."},{"key":"e_1_3_1_37_2","volume-title":"Proceedings of IUI Workshops","author":"Chromik Michael","year":"2019","unstructured":"Michael Chromik, Malin Eiband, Sarah Theres V\u00f6lkel, and Daniel Buschek. 2019. Dark patterns of explainability, transparency, and user control for intelligent systems. In Proceedings of IUI Workshops, Vol. 
2327."},{"key":"e_1_3_1_38_2","doi-asserted-by":"crossref","first-page":"3","DOI":"10.1007\/978-3-030-51924-7_1","volume-title":"Proceedings of International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems","author":"Ciatto Giovanni","year":"2020","unstructured":"Giovanni Ciatto, Michael I. Schumacher, Andrea Omicini, and Davide Calvaresi. 2020. Agent-based explanations in AI: Towards an abstract framework. In Proceedings of International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems. Springer, 3\u201320."},{"issue":"8","key":"e_1_3_1_39_2","doi-asserted-by":"crossref","first-page":"608","DOI":"10.1080\/0144929X.2016.1175507","article-title":"Making HCI theory work: An analysis of the use of activity theory in HCI research","volume":"35","author":"Clemmensen Torkil","year":"2016","unstructured":"Torkil Clemmensen, Victor Kaptelinin, and Bonnie Nardi. 2016. Making HCI theory work: An analysis of the use of activity theory in HCI research. Behaviour & Information Technology 35, 8 (2016), 608\u2013627.","journal-title":"Behaviour & Information Technology"},{"key":"e_1_3_1_40_2","doi-asserted-by":"crossref","first-page":"22","DOI":"10.1145\/3568444.3568456","volume-title":"Proceedings of the 21st International Conference on Mobile and Ubiquitous Multimedia","author":"Colley Ashley","year":"2022","unstructured":"Ashley Colley, Kaisa V\u00e4\u00e4n\u00e4nen, and Jonna H\u00e4kkil\u00e4. 2022. Tangible Explainable AI-an initial conceptual framework. In Proceedings of the 21st International Conference on Mobile and Ubiquitous Multimedia, 22\u201327."},{"issue":"1","key":"e_1_3_1_41_2","first-page":"160940691879747","article-title":"The central role of theory in qualitative research","volume":"17","author":"Collins Christopher S.","year":"2018","unstructured":"Christopher S. Collins and Carrie M. Stockton. 2018. The central role of theory in qualitative research. 
International Journal of Qualitative Methods 17, 1 (2018), 1609406918797475.","journal-title":"International Journal of Qualitative Methods"},{"issue":"1","key":"e_1_3_1_42_2","doi-asserted-by":"crossref","first-page":"niae013","DOI":"10.1093\/nc\/niae013","article-title":"Folk psychological attributions of consciousness to large language models","volume":"2024","author":"Colombatto Clara","year":"2024","unstructured":"Clara Colombatto and Stephen M. Fleming. 2024. Folk psychological attributions of consciousness to large language models. Neuroscience of Consciousness 2024, 1 (2024), niae013.","journal-title":"Neuroscience of Consciousness"},{"key":"e_1_3_1_43_2","doi-asserted-by":"crossref","first-page":"102423","DOI":"10.1016\/j.artmed.2022.102423","article-title":"A manifesto on explainability for artificial intelligence in medicine","volume":"133","author":"Combi Carlo","year":"2022","unstructured":"Carlo Combi, Beatrice Amico, Riccardo Bellazzi, Andreas Holzinger, Jason H. Moore, Marinka Zitnik, and John H. Holmes. 2022. A manifesto on explainability for artificial intelligence in medicine. Artificial Intelligence in Medicine 133, (2022), 102423.","journal-title":"Artificial Intelligence in Medicine"},{"issue":"1","key":"e_1_3_1_44_2","doi-asserted-by":"crossref","first-page":"18","DOI":"10.1109\/MIS.2023.3334639","article-title":"An operational framework for guiding human evaluation in explainable and trustworthy AI","volume":"39","author":"Confalonieri Roberto","year":"2023","unstructured":"Roberto Confalonieri and Jose M. Alonso-Moral. 2023. An operational framework for guiding human evaluation in explainable and trustworthy AI. 
IEEE Intelligent Systems 39, 1 (2023), 18\u201328.","journal-title":"IEEE Intelligent Systems"},{"issue":"10","key":"e_1_3_1_45_2","doi-asserted-by":"crossref","first-page":"13101","DOI":"10.1109\/TNNLS.2023.3270027","article-title":"Toward explainable affective computing: A review","volume":"35","author":"Corti\u00f1as-Lorenzo Karina","year":"2023","unstructured":"Karina Corti\u00f1as-Lorenzo and Gerard Lacey. 2023. Toward explainable affective computing: A review. IEEE Transactions on Neural Networks and Learning Systems 35, 10 (2023), 13101\u201313121.","journal-title":"IEEE Transactions on Neural Networks and Learning Systems"},{"issue":"5","key":"e_1_3_1_46_2","doi-asserted-by":"crossref","first-page":"455","DOI":"10.1007\/s11257-008-9051-3","article-title":"The effects of transparency on trust in and acceptance of a content-based art recommender","volume":"18","author":"Cramer Henriette","year":"2008","unstructured":"Henriette Cramer, Vanessa Evers, Satyan Ramlal, Maarten Van Someren, Lloyd Rutledge, Natalia Stash, Lora Aroyo, and Bob Wielinga. 2008. The effects of transparency on trust in and acceptance of a content-based art recommender. User Modeling and User-Adapted Interaction 18, 5 (2008), 455\u2013496.","journal-title":"User Modeling and User-Adapted Interaction"},{"key":"e_1_3_1_47_2","volume-title":"Research Design: Qualitative, Quantitative, and Mixed Methods Approaches","author":"Creswell John W.","year":"2017","unstructured":"John W. Creswell and J. David Creswell. 2017. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Sage Publications."},{"key":"e_1_3_1_48_2","doi-asserted-by":"crossref","first-page":"315","DOI":"10.1145\/3511808.3557247","volume-title":"Proceedings of the 31st ACM International Conference on Information & Knowledge Management","author":"Cugny Robin","year":"2022","unstructured":"Robin Cugny, Julien Aligon, Max Chevalier, Geoffrey Roman Jimenez, and Olivier Teste. 2022. 
AutoXAI: A framework to automatically select the most adapted XAI solution. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 315\u2013324."},{"key":"e_1_3_1_49_2","doi-asserted-by":"crossref","first-page":"1635","DOI":"10.1145\/2556288.2557342","volume-title":"Proceedings of the SIGCHI Conference on Human Factors in Computing Systems","author":"Dalsgaard Peter","year":"2014","unstructured":"Peter Dalsgaard and Christian Dindler. 2014. Between theory and practice: Bridging concepts in HCI research. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1635\u20131644."},{"key":"e_1_3_1_50_2","first-page":"1","volume-title":"Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems","author":"Danry Valdemar","year":"2025","unstructured":"Valdemar Danry, Pat Pataranutaporn, Matthew Groh, and Ziv Epstein. 2025. Deceptive explanations by large language models lead people to change their beliefs about misinformation more often than honest explanations. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 1\u201331."},{"key":"e_1_3_1_51_2","unstructured":"Arun Das and Paul Rad. 2020. Opportunities and challenges in explainable artificial intelligence (XAI): A survey. arXiv:2006.11371. Retrieved from https:\/\/arxiv.org\/abs\/2006.11371"},{"issue":"23","key":"e_1_3_1_52_2","doi-asserted-by":"crossref","first-page":"16893","DOI":"10.1007\/s00521-023-08423-1","article-title":"Explainable reinforcement learning for broad-XAI: A conceptual framework and survey","volume":"35","author":"Dazeley Richard","year":"2023","unstructured":"Richard Dazeley, Peter Vamplew, and Francisco Cruz. 2023. Explainable reinforcement learning for broad-XAI: A conceptual framework and survey. 
Neural Computing and Applications 35, 23 (2023), 16893\u201316916.","journal-title":"Neural Computing and Applications"},{"key":"e_1_3_1_53_2","doi-asserted-by":"crossref","first-page":"103525","DOI":"10.1016\/j.artint.2021.103525","article-title":"Levels of explainable artificial intelligence for human-aligned conversational explanations","volume":"299","author":"Dazeley Richard","year":"2021","unstructured":"Richard Dazeley, Peter Vamplew, Cameron Foale, Charlotte Young, Sunil Aryal, and Francisco Cruz. 2021. Levels of explainable artificial intelligence for human-aligned conversational explanations. Artificial Intelligence 299 (2021), 103525.","journal-title":"Artificial Intelligence"},{"key":"e_1_3_1_54_2","volume-title":"Proceedings of 2017 AAAI Fall Symposium Series","author":"De Graaf Maartje M. A.","year":"2017","unstructured":"Maartje M. A. De Graaf and Bertram F. Malle. 2017. How people explain action (and autonomous intelligent systems should too). In Proceedings of 2017 AAAI Fall Symposium Series."},{"key":"e_1_3_1_55_2","first-page":"130","volume-title":"Proceedings of International Conference on Human-Computer Interaction","author":"de Oliveira Carvalho Niltemberg","year":"2022","unstructured":"Niltemberg de Oliveira Carvalho, Andr\u00e9ia Lib\u00f3rio Sampaio, and Davi Romero de Vasconcelos. 2022. MoreXAI: A model to reason about the explanation design in AI systems. In Proceedings of International Conference on Human-Computer Interaction. Springer, 130\u2013148."},{"issue":"3","key":"e_1_3_1_56_2","doi-asserted-by":"crossref","first-page":"342","DOI":"10.1287\/isre.7.3.342","article-title":"The use and effects of knowledge-based system explanations: Theoretical foundations and a framework for empirical evaluation","volume":"7","author":"Dhaliwal Jasbir S.","year":"1996","unstructured":"Jasbir S. Dhaliwal and Izak Benbasat. 1996. 
The use and effects of knowledge-based system explanations: Theoretical foundations and a framework for empirical evaluation. Information Systems Research 7, 3 (1996), 342\u2013362.","journal-title":"Information Systems Research"},{"key":"e_1_3_1_57_2","first-page":"1","article-title":"How do people develop folk theories of generative AI text-to-image models? A qualitative study on how people strive to explain and make sense of GenAI","author":"Di Lodovico Chiara","year":"2025","unstructured":"Chiara Di Lodovico, Federico Torrielli, Luigi Di Caro, and Amon Ra. 2025. How do people develop folk theories of generative AI text-to-image models? A qualitative study on how people strive to explain and make sense of GenAI. International Journal of Human\u2013Computer Interaction 41 (2025), 1\u201325.","journal-title":"International Journal of Human\u2013Computer Interaction"},{"key":"e_1_3_1_58_2","doi-asserted-by":"crossref","first-page":"143","DOI":"10.1016\/j.inffus.2021.11.017","article-title":"A novel model usability evaluation framework (MUsE) for explainable artificial intelligence","volume":"81","author":"Dieber J\u00fcrgen","year":"2022","unstructured":"J\u00fcrgen Dieber and Sabrina Kirrane. 2022. A novel model usability evaluation framework (MUsE) for explainable artificial intelligence. Information Fusion 81, (2022), 143\u2013153.","journal-title":"Information Fusion"},{"issue":"1","key":"e_1_3_1_59_2","doi-asserted-by":"crossref","first-page":"114","DOI":"10.1037\/xge0000033","article-title":"Algorithm aversion: People erroneously avoid algorithms after seeing them err","volume":"144","author":"Dietvorst Berkeley J.","year":"2015","unstructured":"Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey. 2015. Algorithm aversion: People erroneously avoid algorithms after seeing them err. 
Journal of Experimental Psychology: General 144, 1 (2015), 114.","journal-title":"Journal of Experimental Psychology: General"},{"key":"e_1_3_1_60_2","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-030-30371-6","volume-title":"Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way","author":"Dignum Virginia","year":"2019","unstructured":"Virginia Dignum. 2019. Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way, Vol. 2156. Springer."},{"key":"e_1_3_1_61_2","first-page":"51","volume-title":"Proceedings of International Conference of the Italian Association for Artificial Intelligence","author":"Donadello Ivan","year":"2020","unstructured":"Ivan Donadello and Mauro Dragoni. 2020. SeXAI: A semantic explainable artificial intelligence framework. In Proceedings of International Conference of the Italian Association for Artificial Intelligence. Springer, 51\u201366."},{"key":"e_1_3_1_62_2","unstructured":"Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv:1702.08608. Retrieved from https:\/\/arxiv.org\/abs\/1702.08608"},{"key":"e_1_3_1_63_2","first-page":"1","volume-title":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","author":"Ehsan Upol","year":"2021","unstructured":"Upol Ehsan, Q. Vera Liao, Michael Muller, Mark O. Riedl, and Justin D. Weisz. 2021. Expanding explainability: Towards social transparency in AI systems. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1\u201319."},{"key":"e_1_3_1_64_2","first-page":"449","volume-title":"Proceedings of HCI International 2020-Late Breaking Papers: Multimodality and Intelligence: 22nd HCI International Conference (HCII \u201920)","author":"Ehsan Upol","year":"2020","unstructured":"Upol Ehsan and Mark O. Riedl. 2020. Human-centered Explainable AI: Towards a reflective sociotechnical approach. 
In Proceedings of HCI International 2020-Late Breaking Papers: Multimodality and Intelligence: 22nd HCI International Conference (HCII \u201920). Springer, 449\u2013466."},{"issue":"6","key":"e_1_3_1_65_2","article-title":"Explainability pitfalls: Beyond dark patterns in Explainable AI","volume":"5","author":"Ehsan Upol","year":"2024","unstructured":"Upol Ehsan and Mark O. Riedl. 2024. Explainability pitfalls: Beyond dark patterns in Explainable AI. Patterns 5, 6 (2024), 100971.","journal-title":"Patterns"},{"issue":"1","key":"e_1_3_1_66_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3579467","article-title":"Charting the sociotechnical gap in Explainable AI: A framework to address the gap in XAI","volume":"7","author":"Ehsan Upol","year":"2023","unstructured":"Upol Ehsan, Koustuv Saha, Munmun De Choudhury, and Mark O. Riedl. 2023. Charting the sociotechnical gap in Explainable AI: A framework to address the gap in XAI. In Proceedings of the ACM on Human-Computer Interaction 7, CSCW1 (2023), 1\u201332.","journal-title":"Proceedings of the ACM on Human-Computer Interaction"},{"key":"e_1_3_1_67_2","first-page":"1","volume-title":"Extended Abstracts of the CHI Conference on Human Factors in Computing Systems","author":"Ehsan Upol","year":"2024","unstructured":"Upol Ehsan, Elizabeth A. Watkins, Philipp Wintersberger, Carina Manger, Sunnie S. Y. Kim, Niels Van Berkel, Andreas Riener, and Mark O. Riedl. 2024. Human-centered Explainable AI (HCXAI): Reloading explainability in the era of Large Language Models (LLMs). In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1\u20136."},{"key":"e_1_3_1_68_2","first-page":"1","volume-title":"CHI Conference on Human Factors in Computing Systems Extended Abstracts","author":"Ehsan Upol","year":"2022","unstructured":"Upol Ehsan, Philipp Wintersberger, Q. Vera Liao, Elizabeth Anne Watkins, Carina Manger, Hal Daum\u00e9, III, Andreas Riener, and Mark O. Riedl. 2022. 
Human-centered Explainable AI (HCXAI): Beyond opening the black-box of AI. In CHI Conference on Human Factors in Computing Systems Extended Abstracts, 1\u20137."},{"key":"e_1_3_1_69_2","first-page":"1","volume-title":"Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems","author":"Eiband Malin","year":"2019","unstructured":"Malin Eiband, Daniel Buschek, Alexander Kremer, and Heinrich Hussmann. 2019. The impact of placebic explanations on trust in intelligent systems. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, 1\u20136."},{"key":"e_1_3_1_70_2","first-page":"211","volume-title":"Proceedings of 23rd International Conference on Intelligent User Interfaces","author":"Eiband Malin","year":"2018","unstructured":"Malin Eiband, Hanna Schneider, Mark Bilandzic, Julian Fazekas-Con, Mareike Haug, and Heinrich Hussmann. 2018. Bringing transparency design into practice. In Proceedings of 23rd International Conference on Intelligent User Interfaces, 211\u2013223."},{"key":"e_1_3_1_71_2","unstructured":"Eva Eigner and Thorsten H\u00e4ndler. 2024. Determinants of LLM-assisted decision-making. arXiv:2402.17385. Retrieved from https:\/\/arxiv.org\/abs\/2402.17385"},{"key":"e_1_3_1_72_2","doi-asserted-by":"crossref","first-page":"107574","DOI":"10.1016\/j.chb.2022.107574","article-title":"Supporting human-AI teams: Transparency, explainability, and situation awareness","volume":"140","author":"Endsley Mica R.","year":"2023","unstructured":"Mica R. Endsley. 2023. Supporting human-AI teams: Transparency, explainability, and situation awareness. 
Computers in Human Behavior 140 (2023), 107574.","journal-title":"Computers in Human Behavior"},{"key":"e_1_3_1_73_2","first-page":"1","article-title":"Regulation (EU) 2024\/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)","volume":"168","author":"European Union","year":"2024","unstructured":"European Union. 2024. Regulation (EU) 2024\/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L 168 (2024), 1\u2013247. Retrieved from https:\/\/eur-lex.europa.eu\/legal-content\/EN\/TXT\/?uri=CELEX:32024R1689","journal-title":"Official Journal of the European Union"},{"key":"e_1_3_1_74_2","first-page":"1014","volume-title":"Proceedings of 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC)","author":"Friedman Scott E.","year":"2021","unstructured":"Scott E. Friedman, Jeffrey Rye, Matthew McLure, Helen C. Wauck, Pooja Patel, Ruta Wheelock, Mark A. Valovage, Steven Johnston, and Christopher Miller. 2021. Provenance as a substrate for human sensemaking and explanation of machine collaborators. In Proceedings of 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 1014\u20131019."},{"key":"e_1_3_1_75_2","doi-asserted-by":"crossref","first-page":"105904","DOI":"10.1016\/j.engappai.2023.105904","article-title":"An explainable decision support system for predictive process analytics","volume":"120","author":"Galanti Riccardo","year":"2023","unstructured":"Riccardo Galanti, Massimiliano de Leoni, Merylin Monaro, Nicol\u00f2 Navarin, Alan Marazzi, Brigida Di Stasi, and St\u00e9phanie Maldera. 2023. An explainable decision support system for predictive process analytics. 
Engineering Applications of Artificial Intelligence 120 (2023), 105904.","journal-title":"Engineering Applications of Artificial Intelligence"},{"key":"e_1_3_1_76_2","unstructured":"Deep Ganguli Nicholas Schiefer Marina Favaro and Jack Clark. 2023. Challenges in Evaluating AI Systems. Retrieved July 18 2025 from https:\/\/www.anthropic.com\/research\/evaluating-ai-systems"},{"key":"e_1_3_1_77_2","first-page":"458","volume-title":"Proceedings of International Conference on Intelligent Data Engineering and Automated Learning","author":"Garouani Moncef","year":"2023","unstructured":"Moncef Garouani and Mourad Bouneffa. 2023. Unlocking the black box: Towards interactive explainable automated machine learning. In Proceedings of International Conference on Intelligent Data Engineering and Automated Learning. Springer, 458\u2013469."},{"key":"e_1_3_1_78_2","unstructured":"Audrey Girouard and Erin Treacy Solovey. 2018. Reflecting on the Impact of HCI Frameworks. Retrieved from https:\/\/api.semanticscholar.org\/CorpusID:3585981"},{"issue":"2","key":"e_1_3_1_79_2","first-page":"4","article-title":"Understanding, selecting, and integrating a theoretical framework in dissertation research: Creating the blueprint for your \u201chouse\u201d","volume":"4","author":"Grant Cynthia","year":"2014","unstructured":"Cynthia Grant and Azadeh Osanloo. 2014. Understanding, selecting, and integrating a theoretical framework in dissertation research: Creating the blueprint for your \u201chouse\u201d. Administrative Issues Journal 4, 2 (2014), 4.","journal-title":"Administrative Issues Journal"},{"issue":"6","key":"e_1_3_1_80_2","article-title":"Use of theoretical and conceptual frameworks in qualitative research","volume":"21","author":"Green Helen Elise","year":"2014","unstructured":"Helen Elise Green. 2014. Use of theoretical and conceptual frameworks in qualitative research. 
Nurse Researcher 21, 6 (2014).","journal-title":"Nurse Researcher"},{"issue":"3","key":"e_1_3_1_81_2","doi-asserted-by":"crossref","first-page":"611","DOI":"10.2307\/25148742","article-title":"The nature of theory in information systems","volume":"30","author":"Gregor Shirley","year":"2006","unstructured":"Shirley Gregor. 2006. The nature of theory in information systems. MIS Quarterly 30, 3 (Sep. 2006), 611\u2013642.","journal-title":"MIS Quarterly"},{"key":"e_1_3_1_82_2","first-page":"1","volume-title":"Extended Abstracts of the CHI Conference on Human Factors in Computing Systems","author":"Guo Jiajing","year":"2024","unstructured":"Jiajing Guo, Vikram Mohanty, Jorge H. Piazentin Ono, Hongtao Hao, Liang Gou, and Liu Ren. 2024. Investigating interaction modes and user agency in human-LLM collaboration for domain-specific data analysis. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1\u20139."},{"key":"e_1_3_1_83_2","doi-asserted-by":"crossref","first-page":"162","DOI":"10.1109\/REW56159.2022.00038","volume-title":"Proceedings of 2022 IEEE 30th International Requirements Engineering Conference Workshops (REW)","author":"Habiba Umm-E","year":"2022","unstructured":"Umm-E Habiba, Justus Bogner, and Stefan Wagner. 2022. Can Requirements Engineering support explainable artificial intelligence? towards a user-centric approach for explainability requirements. In Proceedings of 2022 IEEE 30th International Requirements Engineering Conference Workshops (REW). IEEE, 162\u2013165."},{"key":"e_1_3_1_84_2","doi-asserted-by":"crossref","first-page":"243","DOI":"10.1023\/A:1015298005381","article-title":"Activity theory and distributed cognition: Or what does CSCW need to DO with theories","volume":"11","author":"Halverson Christine A.","year":"2002","unstructured":"Christine A. Halverson. 2002. Activity theory and distributed cognition: Or what does CSCW need to DO with theories? 
Computer Supported Cooperative Work (CSCW) 11 (2002), 243\u2013267.","journal-title":"Computer Supported Cooperative Work (CSCW)"},{"issue":"1","key":"e_1_3_1_85_2","doi-asserted-by":"crossref","first-page":"90","DOI":"10.1016\/j.ijhcs.2008.09.008","article-title":"Designs for explaining intelligent agents","volume":"67","author":"Haynes Steven R.","year":"2009","unstructured":"Steven R. Haynes, Mark A. Cohen, and Frank E. Ritter. 2009. Designs for explaining intelligent agents. International Journal of Human-Computer Studies 67, 1 (2009), 90\u2013110.","journal-title":"International Journal of Human-Computer Studies"},{"key":"e_1_3_1_86_2","first-page":"305","volume-title":"Proceedings of International Conference on Artificial Intelligence and Security","author":"He Mingshu","year":"2021","unstructured":"Mingshu He, Lei Jin, and Mei Song. 2021. Interpretability framework of network security traffic classification based on machine learning. In Proceedings of International Conference on Artificial Intelligence and Security. Springer, 305\u2013320."},{"issue":"1","key":"e_1_3_1_87_2","doi-asserted-by":"crossref","first-page":"157","DOI":"10.1097\/ACM.0000000000002902","article-title":"Uncertainty in decision making in medicine: A scoping review and thematic analysis of conceptual models","volume":"95","author":"Helou Marieka A.","year":"2020","unstructured":"Marieka A. Helou, Deborah DiazGranados, Michael S. Ryan, and John W. Cyrus. 2020. Uncertainty in decision making in medicine: A scoping review and thematic analysis of conceptual models. Academic Medicine 95, 1 (2020), 157\u2013165.","journal-title":"Academic Medicine"},{"key":"e_1_3_1_88_2","doi-asserted-by":"crossref","first-page":"241","DOI":"10.1145\/358916.358995","volume-title":"Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work","author":"Herlocker Jonathan L.","year":"2000","unstructured":"Jonathan L. Herlocker, Joseph A. Konstan, and John Riedl. 2000. 
Explaining collaborative filtering recommendations. In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, 241\u2013250."},{"key":"e_1_3_1_89_2","unstructured":"Robert R. Hoffman William J. Clancey and Shane T. Mueller. 2020. Explaining AI as an exploratory process: The Peircean abduction model. arXiv:2009.14795. Retrieved from https:\/\/arxiv.org\/abs\/2009.14795"},{"key":"e_1_3_1_90_2","doi-asserted-by":"crossref","first-page":"1114806","DOI":"10.3389\/fcomp.2023.1114806","article-title":"Evaluating machine-generated explanations: A \u201cscorecard\u201d method for XAI measurement science","volume":"5","author":"Hoffman Robert R.","year":"2023","unstructured":"Robert R. Hoffman, Mohammadreza Jalaeian, Connor Tate, Gary Klein, and Shane T. Mueller. 2023. Evaluating machine-generated explanations: A \u201cscorecard\u201d method for XAI measurement science. Frontiers in Computer Science 5 (2023), 1114806.","journal-title":"Frontiers in Computer Science"},{"key":"e_1_3_1_91_2","doi-asserted-by":"crossref","first-page":"1117848","DOI":"10.3389\/fcomp.2023.1117848","article-title":"Explainable AI: Roles and stakeholders, desirements and challenges","volume":"5","author":"Hoffman Robert R.","year":"2023","unstructured":"Robert R. Hoffman, Shane T. Mueller, Gary Klein, Mohammadreza Jalaeian, and Connor Tate. 2023. Explainable AI: Roles and stakeholders, desirements and challenges. Frontiers in Computer Science 5 (2023), 1117848.","journal-title":"Frontiers in Computer Science"},{"issue":"2","key":"e_1_3_1_92_2","doi-asserted-by":"crossref","first-page":"174","DOI":"10.1145\/353485.353487","article-title":"Distributed cognition: Toward a new foundation for human-computer interaction research","volume":"7","author":"Hollan James","year":"2000","unstructured":"James Hollan, Edwin Hutchins, and David Kirsh. 2000. Distributed cognition: Toward a new foundation for human-computer interaction research. 
ACM Transactions on Computer-Human Interaction 7, 2 (2000), 174\u2013196.","journal-title":"ACM Transactions on Computer-Human Interaction"},{"issue":"3","key":"e_1_3_1_93_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/2362364.2362371","article-title":"Strong concepts: Intermediate-level knowledge in interaction design research","volume":"19","author":"H\u00f6\u00f6k Kristina","year":"2012","unstructured":"Kristina H\u00f6\u00f6k and Jonas L\u00f6wgren. 2012. Strong concepts: Intermediate-level knowledge in interaction design research. ACM Transactions on Computer-Human Interaction 19, 3 (2012), 1\u201318.","journal-title":"ACM Transactions on Computer-Human Interaction"},{"key":"e_1_3_1_94_2","first-page":"15760","volume-title":"Proceedings of the AAAI Conference on Artificial Intelligence","volume":"37","author":"Hu Brian","year":"2023","unstructured":"Brian Hu, Paul Tunison, Brandon RichardWebster, and Anthony Hoogs. 2023. XAItk-saliency: An open source Explainable AI toolkit for saliency. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37, 15760\u201315766."},{"issue":"4","key":"e_1_3_1_95_2","doi-asserted-by":"crossref","first-page":"e40","DOI":"10.1002\/ail2.40","article-title":"XAITK: The explainable AI toolkit","volume":"2","author":"Hu Brian","year":"2021","unstructured":"Brian Hu, Paul Tunison, Bhavan Vasu, Nitesh Menon, Roddy Collins, and Anthony Hoogs. 2021. XAITK: The explainable AI toolkit. Applied AI Letters 2, 4 (2021), e40.","journal-title":"Applied AI Letters"},{"key":"e_1_3_1_96_2","doi-asserted-by":"crossref","first-page":"104365","DOI":"10.1016\/j.jbi.2023.104365","article-title":"Explainable discovery of disease biomarkers: The case of ovarian cancer to illustrate the best practice in machine learning and Shapley analysis","volume":"141","author":"Huang Weitong","year":"2023","unstructured":"Weitong Huang, Hanna Suominen, Tommy Liu, Gregory Rice, Carlos Salomon, and Amanda S. Barnard. 2023. 
Explainable discovery of disease biomarkers: The case of ovarian cancer to illustrate the best practice in machine learning and Shapley analysis. Journal of Biomedical Informatics 141 (2023), 104365.","journal-title":"Journal of Biomedical Informatics"},{"issue":"2","key":"e_1_3_1_97_2","doi-asserted-by":"crossref","first-page":"185","DOI":"10.1080\/09718923.2014.11893249","article-title":"Is there a conceptual difference between theoretical and conceptual frameworks","volume":"38","author":"Imenda Sitwala","year":"2014","unstructured":"Sitwala Imenda. 2014. Is there a conceptual difference between theoretical and conceptual frameworks? Journal of Social Sciences 38, 2 (2014), 185\u2013195.","journal-title":"Journal of Social Sciences"},{"issue":"3","key":"e_1_3_1_98_2","doi-asserted-by":"crossref","first-page":"1353","DOI":"10.3390\/app12031353","article-title":"A systematic review of explainable artificial intelligence in terms of different application domains and tasks","volume":"12","author":"Islam Mir Riyanul","year":"2022","unstructured":"Mir Riyanul Islam, Mobyen Uddin Ahmed, Shaibal Barua, and Shahina Begum. 2022. A systematic review of explainable artificial intelligence in terms of different application domains and tasks. Applied Sciences 12, 3 (2022), 1353.","journal-title":"Applied Sciences"},{"key":"e_1_3_1_99_2","doi-asserted-by":"crossref","first-page":"459","DOI":"10.1613\/jair.1.14053","article-title":"Diagnosing AI explanation methods with folk concepts of behavior","volume":"78","author":"Jacovi Alon","year":"2023","unstructured":"Alon Jacovi, Jasmijn Bastings, Sebastian Gehrmann, Yoav Goldberg, and Katja Filippova. 2023. Diagnosing AI explanation methods with folk concepts of behavior. 
Journal of Artificial Intelligence Research 78 (2023), 459\u2013489.","journal-title":"Journal of Artificial Intelligence Research"},{"key":"e_1_3_1_100_2","first-page":"177","volume-title":"Proceedings of 2020 IEEE 6th International Conference on Collaboration and Internet Computing (CIC)","author":"Jaigirdar Fariha Tasmin","year":"2020","unstructured":"Fariha Tasmin Jaigirdar, Carsten Rudolph, Gillian Oliver, David Watts, and Chris Bain. 2020. What information is required for explainable AI?: A provenance-based research agenda and future challenges. In Proceedings of 2020 IEEE 6th International Conference on Collaboration and Internet Computing (CIC). IEEE, 177\u2013183."},{"key":"e_1_3_1_101_2","doi-asserted-by":"crossref","first-page":"108944","DOI":"10.1016\/j.ijar.2023.108944","article-title":"A general framework for personalising post hoc explanations through user knowledge integration","volume":"160","author":"Jeyasothy Adulam","year":"2023","unstructured":"Adulam Jeyasothy, Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, and Marcin Detyniecki. 2023. A general framework for personalising post hoc explanations through user knowledge integration. International Journal of Approximate Reasoning 160 (2023), 108944.","journal-title":"International Journal of Approximate Reasoning"},{"key":"e_1_3_1_102_2","first-page":"1","volume-title":"Proceedings of the ACM on Human-Computer Interaction","volume":"7","author":"Jin Yucheng","year":"2023","unstructured":"Yucheng Jin, Wanling Cai, Li Chen, Yuwan Dai, and Tonglin Jiang. 2023. Understanding disclosure and support for youth mental health in social music communities. Proceedings of the ACM on Human-Computer Interaction 7, CSCW1 (2023), 1\u201332."},{"key":"e_1_3_1_103_2","first-page":"1","volume-title":"Proceedings of the CHI Conference on Human Factors in Computing Systems","author":"Jin Yucheng","year":"2024","unstructured":"Yucheng Jin, Wanling Cai, Li Chen, Yizhe Zhang, Gavin Doherty, and Tonglin Jiang. 
2024. Exploring the design of generative AI in supporting music-based reminiscence for older adults. In Proceedings of the CHI Conference on Human Factors in Computing Systems, 1\u201317."},{"key":"e_1_3_1_104_2","doi-asserted-by":"crossref","first-page":"103290","DOI":"10.1016\/j.ijhcs.2024.103290","article-title":"The way you assess matters: User interaction design of survey chatbots for mental health","volume":"189","author":"Jin Yucheng","year":"2024","unstructured":"Yucheng Jin, Li Chen, Xianglin Zhao, and Wanling Cai. 2024. The way you assess matters: User interaction design of survey chatbots for mental health. International Journal of Human-Computer Studies 189 (2024), 103290.","journal-title":"International Journal of Human-Computer Studies"},{"key":"e_1_3_1_105_2","doi-asserted-by":"crossref","DOI":"10.4324\/9781315131467","volume-title":"The Conduct of Inquiry: Methodology for Behavioural Science","author":"Kaplan Abraham","year":"2017","unstructured":"Abraham Kaplan. 2017. The Conduct of Inquiry: Methodology for Behavioural Science. Routledge."},{"key":"e_1_3_1_106_2","doi-asserted-by":"crossref","unstructured":"Atoosa Kasirzadeh. 2021. Reasons values stakeholders: A philosophical framework for explainable artificial intelligence. arXiv:2103.00752. Retrieved from https:\/\/arxiv.org\/abs\/2103.00752","DOI":"10.1145\/3442188.3445866"},{"key":"e_1_3_1_107_2","doi-asserted-by":"crossref","first-page":"702","DOI":"10.1145\/3531146.3533135","volume-title":"Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency","author":"Kaur Harmanpreet","year":"2022","unstructured":"Harmanpreet Kaur, Eytan Adar, Eric Gilbert, and Cliff Lampe. 2022. Sensible AI: Re-imagining interpretability and explainability using sensemaking theory. 
In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 702\u2013714."},{"key":"e_1_3_1_108_2","first-page":"1","volume-title":"Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems","author":"Kaur Harmanpreet","year":"2020","unstructured":"Harmanpreet Kaur, Harsha Nori, Samuel Jenkins, Rich Caruana, Hanna Wallach, and Jennifer Wortman Vaughan. 2020. Interpreting interpretability: Understanding data scientists\u2019 use of interpretability tools for machine learning. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1\u201314."},{"key":"e_1_3_1_109_2","volume-title":"Proceedings of the ACM Annual Conference","author":"Kay Alan C.","year":"2011","unstructured":"Alan C. Kay. 2011. A personal computer for children of all ages. In Proceedings of the ACM Annual Conference, Vol. 1."},{"key":"e_1_3_1_110_2","doi-asserted-by":"crossref","first-page":"926","DOI":"10.3389\/fpsyg.2020.00926","article-title":"The concept and components of engagement in different domains applied to eHealth: A systematic scoping review","volume":"11","author":"Kelders Saskia M.","year":"2020","unstructured":"Saskia M. Kelders, Llewellyn Ellardus Van Zyl, and Geke D. S. Ludden. 2020. The concept and components of engagement in different domains applied to eHealth: A systematic scoping review. Frontiers in Psychology 11 (2020), 926.","journal-title":"Frontiers in Psychology"},{"key":"e_1_3_1_111_2","doi-asserted-by":"crossref","first-page":"5805","DOI":"10.1145\/3580305.3599557","volume-title":"Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining","author":"Kenthapadi Krishnaram","year":"2023","unstructured":"Krishnaram Kenthapadi, Himabindu Lakkaraju, and Nazneen Rajani. 2023. Generative AI meets responsible AI: Practical challenges and opportunities. 
In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 5805\u20135806."},{"key":"e_1_3_1_112_2","unstructured":"Fred N. Kerlinger. 1966. Foundations of Behavioral Research. Holt, Rinehart and Winston, New York, NY."},{"key":"e_1_3_1_113_2","first-page":"1","article-title":"Patient-relevant outcomes: What are we talking about? A scoping review to improve conceptual clarity","volume":"20","author":"Kersting Christine","year":"2020","unstructured":"Christine Kersting, Malte Kneer, and Anne Barzel. 2020. Patient-relevant outcomes: What are we talking about? A scoping review to improve conceptual clarity. BMC Health Services Research 20 (2020), 1\u201316.","journal-title":"BMC Health Services Research"},{"key":"e_1_3_1_114_2","first-page":"10244","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Khakzar Ashkan","unstructured":"Ashkan Khakzar, Pedram Khorsandi, Rozhin Nobahari, and Nassir Navab. 2022. Do explanations explain? Model knows best. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 10244\u201310253."},{"key":"e_1_3_1_115_2","doi-asserted-by":"crossref","first-page":"99686","DOI":"10.1109\/ACCESS.2022.3207812","article-title":"Toward accountable and explainable artificial intelligence part one: Theory and examples","volume":"10","author":"Khan Masood M.","year":"2022","unstructured":"Masood M. Khan and Jordan Vice. 2022. Toward accountable and explainable artificial intelligence part one: Theory and examples. 
IEEE Access 10 (2022), 99686\u201399701.","journal-title":"IEEE Access"},{"key":"e_1_3_1_116_2","first-page":"100074","article-title":"Explainable artificial intelligence in education","volume":"3","author":"Khosravi Hassan","year":"2022","unstructured":"Hassan Khosravi, Simon Buckingham Shum, Guanliang Chen, Cristina Conati, Yi-Shan Tsai, Judy Kay, Simon Knight, Roberto Martinez-Maldonado, Shazia Sadiq, and Dragan Ga\u0161evi\u0107. 2022. Explainable artificial intelligence in education. Computers and Education: Artificial Intelligence 3 (2022), 100074.","journal-title":"Computers and Education: Artificial Intelligence"},{"issue":"4","key":"e_1_3_1_117_2","doi-asserted-by":"crossref","first-page":"900","DOI":"10.3390\/make3040045","article-title":"A multi-component framework for the analysis and design of explainable artificial intelligence","volume":"3","author":"Kim Mi-Young","year":"2021","unstructured":"Mi-Young Kim, Shahin Atakishiyev, Housam Khalifa Bashier Babiker, Nawshad Farruque, Randy Goebel, Osmar R. Za\u00efane, Mohammad-Hossein Motallebi, Juliano Rabelo, Talat Syed, Hengshuai Yao, et al. 2021. A multi-component framework for the analysis and design of explainable artificial intelligence. Machine Learning and Knowledge Extraction 3, 4 (2021), 900\u2013921.","journal-title":"Machine Learning and Knowledge Extraction"},{"key":"e_1_3_1_118_2","first-page":"252","volume-title":"Proceedings of the 2021 10th International Conference on Computing and Pattern Recognition","author":"Kim Sebin","year":"2021","unstructured":"Sebin Kim and Jihwan Woo. 2021. Explainable AI framework for the financial rating models: Explaining framework that focuses on the feature influences on the changing classes or rating in various customer models used by the financial institutions. 
In Proceedings of the 2021 10th International Conference on Computing and Pattern Recognition, 252\u2013255."},{"key":"e_1_3_1_119_2","first-page":"280","volume-title":"Proceedings of European Conference on Computer Vision","author":"Kim Sunnie S. Y.","year":"2022","unstructured":"Sunnie S. Y. Kim, Nicole Meister, Vikram V. Ramaswamy, Ruth Fong, and Olga Russakovsky. 2022. HIVE: Evaluating the human interpretability of visual explanations. In Proceedings of European Conference on Computer Vision. Springer, 280\u2013298."},{"key":"e_1_3_1_120_2","first-page":"1","volume-title":"Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems","author":"Kim Sunnie S. Y.","year":"2025","unstructured":"Sunnie S. Y. Kim, Jennifer Wortman Vaughan, Q. Vera Liao, Tania Lombrozo, and Olga Russakovsky. 2025. Fostering appropriate reliance on large language models: The role of explanations, sources, and inconsistencies. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 1\u201319."},{"issue":"6","key":"e_1_3_1_121_2","doi-asserted-by":"crossref","first-page":"44","DOI":"10.5430\/ijhe.v7n6p44","article-title":"Distinguishing between theory, theoretical framework, and conceptual framework: A systematic review of lessons from the field","volume":"7","author":"Kivunja Charles","year":"2018","unstructured":"Charles Kivunja. 2018. Distinguishing between theory, theoretical framework, and conceptual framework: A systematic review of lessons from the field. International Journal of Higher Education 7, 6 (2018), 44\u201353.","journal-title":"International Journal of Higher Education"},{"key":"e_1_3_1_122_2","first-page":"151","volume-title":"Proceedings of the SIGCHI Conference on Human Factors in Computing Systems","author":"Ko Amy J.","year":"2004","unstructured":"Amy J. Ko and Brad A. Myers. 2004. Designing the whyline: A debugging interface for asking questions about program behavior. 
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 151\u2013158."},{"key":"e_1_3_1_123_2","doi-asserted-by":"crossref","first-page":"363","DOI":"10.1109\/RE.2019.00046","volume-title":"2019 IEEE 27th International Requirements Engineering Conference (RE)","author":"K\u00f6hl Maximilian A.","year":"2019","unstructured":"Maximilian A. K\u00f6hl, Kevin Baum, Markus Langer, Daniel Oster, Timo Speith, and Dimitri Bohlender. 2019. Explainability as a non-functional requirement. In 2019 IEEE 27th International Requirements Engineering Conference (RE). IEEE, 363\u2013368."},{"key":"e_1_3_1_124_2","first-page":"1","volume-title":"Proceedings of the ACM on Human-Computer Interaction","volume":"4","author":"Kou Yubo","year":"2020","unstructured":"Yubo Kou and Xinning Gui. 2020. Mediating community-AI interaction through situated explanation: The case of AI-led moderation. In Proceedings of the ACM on Human-Computer Interaction 4, CSCW2 (2020), 1\u201327."},{"key":"e_1_3_1_125_2","doi-asserted-by":"crossref","first-page":"126","DOI":"10.1145\/2678025.2701399","volume-title":"Proceedings of the 20th International Conference on Intelligent User Interfaces","author":"Kulesza Todd","year":"2015","unstructured":"Todd Kulesza, Margaret Burnett, Weng-Keen Wong, and Simone Stumpf. 2015. Principles of explanatory debugging to personalize interactive machine learning. In Proceedings of the 20th International Conference on Intelligent User Interfaces, 126\u2013137."},{"key":"e_1_3_1_126_2","doi-asserted-by":"crossref","first-page":"3","DOI":"10.1109\/VLHCC.2013.6645235","volume-title":"Proceedings of 2013 IEEE Symposium on Visual Languages and Human Centric Computing","author":"Kulesza Todd","year":"2013","unstructured":"Todd Kulesza, Simone Stumpf, Margaret Burnett, Sherry Yang, Irwin Kwan, and Weng-Keen Wong. 2013. Too much, too little, or just right? Ways explanations impact end users\u2019 mental models. 
In Proceedings of 2013 IEEE Symposium on Visual Languages and Human Centric Computing. IEEE, 3\u201310."},{"issue":"3","key":"e_1_3_1_127_2","doi-asserted-by":"crossref","first-page":"e047001","DOI":"10.1136\/bmjopen-2020-047001","article-title":"Evaluating evaluation frameworks: A scoping review of frameworks for assessing health apps","volume":"11","author":"Lagan Sarah","year":"2021","unstructured":"Sarah Lagan, Lev Sandler, and John Torous. 2021. Evaluating evaluation frameworks: A scoping review of frameworks for assessing health apps. BMJ Open 11, 3 (2021), e047001.","journal-title":"BMJ Open"},{"key":"e_1_3_1_128_2","first-page":"1369","volume-title":"Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency","author":"Lai Vivian","year":"2023","unstructured":"Vivian Lai, Chacha Chen, Alison Smith-Renner, Q. Vera Liao, and Chenhao Tan. 2023. Towards a science of human-AI decision making: An overview of design space in empirical human-subject studies. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1369\u20131385."},{"key":"e_1_3_1_129_2","first-page":"1","volume-title":"Proceedings of the ACM on Human-Computer Interaction","volume":"7","author":"Lai Vivian","year":"2023","unstructured":"Vivian Lai, Yiming Zhang, Chacha Chen, Q. Vera Liao, and Chenhao Tan. 2023. Selective explanations: Leveraging human input to align Explainable AI. In Proceedings of the ACM on Human-Computer Interaction 7, CSCW2 (2023), 1\u201335."},{"key":"e_1_3_1_130_2","first-page":"5628","volume-title":"Proceedings of International Conference on Machine Learning","author":"Lakkaraju Himabindu","year":"2020","unstructured":"Himabindu Lakkaraju, Nino Arsov, and Osbert Bastani. 2020. Robust and stable black box explanations. In Proceedings of International Conference on Machine Learning. 
PMLR, 5628\u20135638."},{"key":"e_1_3_1_131_2","doi-asserted-by":"crossref","first-page":"1675","DOI":"10.1145\/2939672.2939874","volume-title":"Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining","author":"Lakkaraju Himabindu","year":"2016","unstructured":"Himabindu Lakkaraju, Stephen H. Bach, and Jure Leskovec. 2016. Interpretable decision sets: A joint framework for description and prediction. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1675\u20131684."},{"key":"e_1_3_1_132_2","doi-asserted-by":"crossref","first-page":"79","DOI":"10.1145\/3375627.3375833","volume-title":"Proceedings of the AAAI\/ACM Conference on AI, Ethics, and Society","author":"Lakkaraju Himabindu","year":"2020","unstructured":"Himabindu Lakkaraju and Osbert Bastani. 2020. \u201cHow do I fool you?\u201d Manipulating user trust via misleading black box explanations. In Proceedings of the AAAI\/ACM Conference on AI, Ethics, and Society, 79\u201385."},{"issue":"4","key":"e_1_3_1_133_2","doi-asserted-by":"crossref","first-page":"407","DOI":"10.1080\/09528139508953820","article-title":"Abduction, experience, and goals: A model of everyday abductive explanation","volume":"7","author":"Leake David B.","year":"1995","unstructured":"David B. Leake. 1995. Abduction, experience, and goals: A model of everyday abductive explanation. Journal of Experimental & Theoretical Artificial Intelligence 7, 4 (1995), 407\u2013428.","journal-title":"Journal of Experimental & Theoretical Artificial Intelligence"},{"issue":"5","key":"e_1_3_1_134_2","doi-asserted-by":"crossref","first-page":"601","DOI":"10.1177\/0162243910377624","article-title":"This is not a boundary object: Reflections on the origin of a concept","volume":"35","author":"Leigh Star Susan","year":"2010","unstructured":"Susan Leigh Star. 2010. This is not a boundary object: Reflections on the origin of a concept. 
Science, Technology, & Human Values 35, 5 (2010), 601\u2013617.","journal-title":"Science, Technology, & Human Values"},{"key":"e_1_3_1_135_2","first-page":"1","volume-title":"Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems","author":"Liao Q. Vera","year":"2020","unstructured":"Q. Vera Liao, Daniel Gruen, and Sarah Miller. 2020. Questioning the AI: Informing design practices for explainable AI user experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1\u201315."},{"key":"e_1_3_1_136_2","unstructured":"Q. Vera Liao Milena Pribi\u0107 Jaesik Han Sarah Miller and Daby Sow. 2021. Question-driven design process for Explainable AI user experiences. arXiv:2104.03483. Retrieved from https:\/\/arxiv.org\/abs\/2104.03483"},{"key":"e_1_3_1_137_2","unstructured":"Q. Vera Liao and Kush R. Varshney. 2021. Human-centered explainable AI (XAI): From algorithms to user experiences. arXiv:2110.10790. Retrieved from https:\/\/arxiv.org\/abs\/2110.10790"},{"key":"e_1_3_1_138_2","doi-asserted-by":"crossref","unstructured":"Q. Vera Liao and Jennifer Wortman Vaughan. 2023. AI transparency in the age of LLMS: A human-centered research roadmap. arXiv:2306.01941. Retrieved from https:\/\/arxiv.org\/abs\/2306.01941","DOI":"10.1162\/99608f92.8036d03b"},{"key":"e_1_3_1_139_2","first-page":"195","volume-title":"Proceedings of the 11th International Conference on Ubiquitous Computing","author":"Lim Brian Y.","year":"2009","unstructured":"Brian Y. Lim and Anind K. Dey. 2009. Assessing demand for intelligibility in context-aware applications. In Proceedings of the 11th International Conference on Ubiquitous Computing, 195\u2013204."},{"key":"e_1_3_1_140_2","first-page":"13","volume-title":"Proceedings of the 12th ACM International Conference on Ubiquitous Computing","author":"Lim Brian Y.","year":"2010","unstructured":"Brian Y. Lim and Anind K. Dey. 2010. Toolkit to support intelligibility in context-aware applications. 
In Proceedings of the 12th ACM International Conference on Ubiquitous Computing, 13\u201322."},{"key":"e_1_3_1_141_2","first-page":"157","volume-title":"Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services","author":"Lim Brian Y.","year":"2011","unstructured":"Brian Y. Lim and Anind K. Dey. 2011. Design of an intelligible mobile context-aware application. In Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services, 157\u2013166."},{"key":"e_1_3_1_142_2","first-page":"2119","volume-title":"Proceedings of the SIGCHI Conference on Human Factors in Computing Systems","author":"Lim Brian Y.","year":"2009","unstructured":"Brian Y. Lim, Anind K. Dey, and Daniel Avrahami. 2009. Why and why not explanations improve the intelligibility of context-aware intelligent systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2119\u20132128."},{"key":"e_1_3_1_143_2","volume-title":"Approaches and Frameworks for HCI Research","author":"Long John","year":"2021","unstructured":"John Long. 2021. Approaches and Frameworks for HCI Research. Cambridge University Press."},{"key":"e_1_3_1_144_2","doi-asserted-by":"crossref","first-page":"102301","DOI":"10.1016\/j.inffus.2024.102301","article-title":"Explainable artificial intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions","author":"Longo Luca","year":"2024","unstructured":"Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, et al. 2024. Explainable artificial intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions. 
Information Fusion (2024), 102301.","journal-title":"Information Fusion"},{"key":"e_1_3_1_145_2","doi-asserted-by":"publisher","DOI":"10.17605\/OSF.IO\/CU6WD"},{"key":"e_1_3_1_146_2","doi-asserted-by":"crossref","unstructured":"Joy Lu Dokyun Lee Tae Wan Kim and David Danks. 2019. Good Explanation for Algorithmic Transparency. SSRN 3503603. Retrieved from https:\/\/ssrn.com\/abstract=3503603","DOI":"10.2139\/ssrn.3503603"},{"key":"e_1_3_1_147_2","doi-asserted-by":"publisher","DOI":"10.1177\/13548565251333212"},{"issue":"6","key":"e_1_3_1_148_2","doi-asserted-by":"crossref","first-page":"689","DOI":"10.1111\/medu.14431","article-title":"Scoping reviews in medical education: A scoping review","volume":"55","author":"Maggio Lauren A.","year":"2021","unstructured":"Lauren A. Maggio, Kelsey Larsen, Aliki Thomas, Joseph A. Costello, and Anthony R. Artino, Jr. 2021. Scoping reviews in medical education: A scoping review. Medical Education 55, 6 (2021), 689\u2013700.","journal-title":"Medical Education"},{"key":"e_1_3_1_149_2","first-page":"504","volume-title":"2023 19th International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT)","author":"Makridis Georgios","unstructured":"Georgios Makridis, Georgios Fatouros, Athanasios Kiourtis, Dimitrios Kotios, Vasileios Koukos, Dimosthenis Kyriazis, and Jonh Soldatos. 2023. Towards a unified multidimensional explainability metric: Evaluating trustworthiness in AI models. In Proceedings of 2023 19th International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT). 
IEEE, 504\u2013511."},{"key":"e_1_3_1_150_2","doi-asserted-by":"crossref","first-page":"103655","DOI":"10.1016\/j.jbi.2020.103655","article-title":"The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies","volume":"113","author":"Markus Aniek F.","year":"2021","unstructured":"Aniek F. Markus, Jan A. Kors, and Peter R. Rijnbeek. 2021. The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies. Journal of Biomedical Informatics 113 (2021), 103655.","journal-title":"Journal of Biomedical Informatics"},{"key":"e_1_3_1_151_2","doi-asserted-by":"crossref","DOI":"10.7551\/mitpress\/9780262514620.001.0001","volume-title":"Vision: A Computational Investigation into the Human Representation and Processing of Visual Information","author":"Marr David","year":"2010","unstructured":"David Marr. 2010. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. MIT Press."},{"key":"e_1_3_1_152_2","first-page":"556","volume-title":"Proceedings of 2023 19th International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT)","author":"Patino Martinez Marta","year":"2023","unstructured":"Marta Patino Martinez and Ainhoa Azqueta-Alzuaz. 2023. A no code XAI framework for policy making. In Proceedings of 2023 19th International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT). IEEE, 556\u2013561."},{"issue":"3","key":"e_1_3_1_153_2","first-page":"225","article-title":"Framing tangible interaction frameworks","volume":"23","author":"Mazalek Ali","year":"2009","unstructured":"Ali Mazalek and Elise Van Den Hoven. 2009. Framing tangible interaction frameworks. 
AI Edam 23, 3 (2009), 225\u2013235.","journal-title":"AI Edam"},{"key":"e_1_3_1_154_2","first-page":"9","article-title":"How methodological frameworks are being developed: Evidence from a scoping review","volume":"20","author":"McMeekin Nicola","year":"2020","unstructured":"Nicola McMeekin, Olivia Wu, Evi Germeni, and Andrew Briggs. 2020. How methodological frameworks are being developed: Evidence from a scoping review. BMC Medical Research Methodology 20 (2020), 1\u20139.","journal-title":"BMC Medical Research Methodology"},{"key":"e_1_3_1_155_2","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-642-13757-0","volume-title":"Design Thinking: Understand-Improve-Apply","author":"Meinel Christoph","year":"2011","unstructured":"Christoph Meinel, Larry Leifer, and Hasso Plattner. 2011. Design Thinking: Understand-Improve-Apply. Springer."},{"issue":"4","key":"e_1_3_1_156_2","doi-asserted-by":"crossref","first-page":"e0249707","DOI":"10.1371\/journal.pone.0249707","article-title":"Family as a health promotion setting: A scoping review of conceptual models of the health-promoting family","volume":"16","author":"Michaelson Valerie","year":"2021","unstructured":"Valerie Michaelson, Kelly A. Pilato, and Colleen M. Davison. 2021. Family as a health promotion setting: A scoping review of conceptual models of the health-promoting family. PloS One 16, 4 (2021), e0249707.","journal-title":"PloS One"},{"key":"e_1_3_1_157_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.artint.2018.07.007","article-title":"Explanation in artificial intelligence: Insights from the social sciences","volume":"267","author":"Miller Tim","year":"2019","unstructured":"Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. 
Artificial Intelligence 267, (2019), 1\u201338.","journal-title":"Artificial Intelligence"},{"key":"e_1_3_1_158_2","doi-asserted-by":"crossref","first-page":"333","DOI":"10.1145\/3593013.3594001","volume-title":"Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency","author":"Miller Tim","year":"2023","unstructured":"Tim Miller. 2023. Explainable AI is dead, long live Explainable AI! hypothesis-driven decision support using evaluative AI. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 333\u2013342."},{"key":"e_1_3_1_159_2","first-page":"1","article-title":"Explainable artificial intelligence: A comprehensive review","author":"Minh Dang","year":"2022","unstructured":"Dang Minh, H. Xiang Wang, Y. Fen Li, and Tan N. Nguyen. 2022. Explainable artificial intelligence: A comprehensive review. Artificial Intelligence Review 55 (2022), 1\u201366.","journal-title":"Artificial Intelligence Review"},{"issue":"3","key":"e_1_3_1_160_2","first-page":"1","article-title":"A multidisciplinary survey and framework for design and evaluation of Explainable AI systems","volume":"11","author":"Mohseni Sina","year":"2021","unstructured":"Sina Mohseni, Niloofar Zarei, and Eric D. Ragan. 2021. A multidisciplinary survey and framework for design and evaluation of Explainable AI systems. ACM Transactions on Interactive Intelligent Systems (TiiS) 11, 3\u20134 (2021), 1\u201345.","journal-title":"ACM Transactions on Interactive Intelligent Systems (TiiS)"},{"key":"e_1_3_1_161_2","doi-asserted-by":"crossref","first-page":"607","DOI":"10.1145\/3351095.3372850","volume-title":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","author":"Mothilal Ramaravind K.","year":"2020","unstructured":"Ramaravind K. Mothilal, Amit Sharma, and Chenhao Tan. 2020. Explaining machine learning classifiers through diverse counterfactual explanations. 
In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 607\u2013617."},{"key":"e_1_3_1_162_2","unstructured":"Shane T. Mueller Elizabeth S. Veinott Robert R. Hoffman Gary Klein Lamia Alam Tauseef Mamun and William J. Clancey. 2021. Principles of explanation in human-AI systems. arXiv:2102.04972. Retrieved from https:\/\/arxiv.org\/abs\/2102.04972"},{"key":"e_1_3_1_163_2","first-page":"1","article-title":"Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach","volume":"18","author":"Munn Zachary","year":"2018","unstructured":"Zachary Munn, Micah D. J. Peters, Cindy Stern, Catalin Tufanaru, Alexa McArthur, and Edoardo Aromataris. 2018. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology 18 (2018), 1\u20137.","journal-title":"BMC Medical Research Methodology"},{"issue":"4","key":"e_1_3_1_164_2","doi-asserted-by":"crossref","first-page":"950","DOI":"10.11124\/JBIES-21-00483","article-title":"What are scoping reviews? Providing a formal definition of scoping reviews as a type of evidence synthesis","volume":"20","author":"Munn Zachary","year":"2022","unstructured":"Zachary Munn, Danielle Pollock, Hanan Khalil, Lyndsay Alexander, Patricia Mclnerney, Christina M. Godfrey, Micah Peters, and Andrea C. Tricco. 2022. What are scoping reviews? Providing a formal definition of scoping reviews as a type of evidence synthesis. 
JBI Evidence Synthesis 20, 4 (2022), 950\u2013952.","journal-title":"JBI Evidence Synthesis"},{"issue":"13","key":"e_1_3_1_165_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3583558","article-title":"From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable AI","volume":"55","author":"Nauta Meike","year":"2023","unstructured":"Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, J\u00f6rg Schl\u00f6tterer, Maurice Van Keulen, and Christin Seifert. 2023. From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable AI. ACM Computational Surveys 55, 13s (2023), 1\u201342.","journal-title":"ACM Computational Surveys"},{"issue":"11","key":"e_1_3_1_166_2","doi-asserted-by":"crossref","first-page":"e22706","DOI":"10.2196\/22706","article-title":"A digitally competent health workforce: Scoping review of educational frameworks","volume":"22","author":"Nazeha Nuraini","year":"2020","unstructured":"Nuraini Nazeha, Deepali Pavagadhi, Bhone Myint Kyaw, Josip Car, Geronimo Jimenez, and Lorainne Tudor Car. 2020. A digitally competent health workforce: Scoping review of educational frameworks. Journal of Medical Internet Research 22, 11 (2020), e22706.","journal-title":"Journal of Medical Internet Research"},{"key":"e_1_3_1_167_2","doi-asserted-by":"crossref","first-page":"204","DOI":"10.1007\/978-3-319-91122-9_18","volume-title":"Proceedings of Engineering Psychology and Cognitive Ergonomics: 15th International Conference (EPCE \u201918), Held as Part of HCI International 2018","author":"Neerincx Mark A.","year":"2018","unstructured":"Mark A. Neerincx, Jasper van der Waa, Frank Kaptein, and Jurriaan van Diggelen. 2018. Using perceptual and cognitive explanations for enhanced human-agent team performance. 
In Proceedings of Engineering Psychology and Cognitive Ergonomics: 15th International Conference (EPCE \u201918), Held as Part of HCI International 2018. Springer, 204\u2013214."},{"key":"e_1_3_1_168_2","first-page":"1210","volume-title":"Proceedings of 2021 36th IEEE\/ACM International Conference on Automated Software Engineering (ASE)","author":"Nguyen Tien N.","year":"2021","unstructured":"Tien N. Nguyen and Raymond Choo. 2021. Human-in-the-loop XAI-enabled vulnerability detection, investigation, and mitigation. In Proceedings of 2021 36th IEEE\/ACM International Conference on Automated Software Engineering (ASE). IEEE, 1210\u20131212."},{"key":"e_1_3_1_169_2","unstructured":"Harsha Nori Samuel Jenkins Paul Koch and Rich Caruana. 2019. InterpretML: A unified framework for machine learning interpretability. arXiv:1909.09223. Retrieved from https:\/\/arxiv.org\/abs\/1909.09223"},{"key":"e_1_3_1_170_2","first-page":"340","volume-title":"Proceedings of the 26th International Conference on Intelligent User Interfaces","author":"Nourani Mahsan","year":"2021","unstructured":"Mahsan Nourani, Chiradeep Roy, Jeremy E. Block, Donald R. Honeycutt, Tahrima Rahman, Eric Ragan, and Vibhav Gogate. 2021. Anchoring bias affects mental model formation and user reliance in Explainable AI systems. In Proceedings of the 26th International Conference on Intelligent User Interfaces, 340\u2013350."},{"issue":"4","key":"e_1_3_1_171_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3531066","article-title":"On the importance of user backgrounds and impressions: Lessons learned from interactive AI applications","volume":"12","author":"Nourani Mahsan","year":"2022","unstructured":"Mahsan Nourani, Chiradeep Roy, Jeremy E. Block, Donald R. Honeycutt, Tahrima Rahman, Eric D. Ragan, and Vibhav Gogate. 2022. On the importance of user backgrounds and impressions: Lessons learned from interactive AI applications.
ACM Transactions on Interactive Intelligent Systems 12, 4 (2022), 1\u201329.","journal-title":"ACM Transactions on Interactive Intelligent Systems"},{"issue":"1","key":"e_1_3_1_172_2","doi-asserted-by":"crossref","first-page":"13","DOI":"10.1007\/s10676-022-09632-3","article-title":"Explanatory pragmatism: A context-sensitive framework for explainable medical AI","volume":"24","author":"Nyrup Rune","year":"2022","unstructured":"Rune Nyrup and Diana Robinson. 2022. Explanatory pragmatism: A context-sensitive framework for explainable medical AI. Ethics and Information Technology 24, 1 (2022), 13.","journal-title":"Ethics and Information Technology"},{"issue":"4","key":"e_1_3_1_173_2","doi-asserted-by":"crossref","first-page":"840","DOI":"10.1109\/TAI.2022.3227225","article-title":"User-centric explainability in healthcare: A knowledge-level perspective of informed machine learning","volume":"4","author":"Oberste Luis","year":"2022","unstructured":"Luis Oberste and Armin Heinzl. 2022. User-centric explainability in healthcare: A knowledge-level perspective of informed machine learning. IEEE Transactions on Artificial Intelligence 4, 4 (2022), 840\u2013857.","journal-title":"IEEE Transactions on Artificial Intelligence"},{"issue":"8","key":"e_1_3_1_174_2","doi-asserted-by":"crossref","first-page":"10142","DOI":"10.1109\/TITS.2021.3122865","article-title":"Explanations in autonomous driving: A survey","volume":"23","author":"Omeiza Daniel","year":"2021","unstructured":"Daniel Omeiza, Helena Webb, Marina Jirotka, and Lars Kunze. 2021. Explanations in autonomous driving: A survey. 
IEEE Transactions on Intelligent Transportation Systems 23, 8 (2021), 10142\u201310162.","journal-title":"IEEE Transactions on Intelligent Transportation Systems"},{"issue":"1","key":"e_1_3_1_175_2","doi-asserted-by":"crossref","first-page":"78","DOI":"10.1080\/10447318.2021.1925436","article-title":"Counterfactual thinking: What theories do in design","volume":"38","author":"Oulasvirta Antti","year":"2022","unstructured":"Antti Oulasvirta and Kasper Hornb\u00e6k. 2022. Counterfactual thinking: What theories do in design. International Journal of Human\u2013Computer Interaction 38, 1 (2022), 78\u201392.","journal-title":"International Journal of Human\u2013Computer Interaction"},{"issue":"3","key":"e_1_3_1_176_2","doi-asserted-by":"crossref","first-page":"441","DOI":"10.1007\/s11023-019-09502-w","article-title":"The pragmatic turn in explainable artificial intelligence (XAI)","volume":"29","author":"P\u00e1ez Andr\u00e9s","year":"2019","unstructured":"Andr\u00e9s P\u00e1ez. 2019. The pragmatic turn in explainable artificial intelligence (XAI). Minds and Machines 29, 3 (2019), 441\u2013459.","journal-title":"Minds and Machines"},{"key":"e_1_3_1_177_2","first-page":"596","volume-title":"Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention","author":"Pahde Frederik","year":"2023","unstructured":"Frederik Pahde, Maximilian Dreyer, Wojciech Samek, and Sebastian Lapuschkin. 2023. Reveal to revise: An Explainable AI life cycle for iterative bias correction of deep models. In Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 596\u2013606."},{"key":"e_1_3_1_178_2","first-page":"3818","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Pahde Frederik","year":"2023","unstructured":"Frederik Pahde, Galip \u00dcmit Yolcu, Alexander Binder, Wojciech Samek, and Sebastian Lapuschkin. 2023. 
Optimizing explanations by network canonization and hyperparameter search. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 3818\u20133827."},{"key":"e_1_3_1_179_2","first-page":"3766","volume-title":"Proceedings of the IEEE\/CVF International Conference on Computer Vision","author":"Palacio Sebastian","year":"2021","unstructured":"Sebastian Palacio, Adriano Lucieri, Mohsin Munir, Sheraz Ahmed, J\u00f6rn Hees, and Andreas Dengel. 2021. XAI handbook: Towards a unified framework for Explainable AI. In Proceedings of the IEEE\/CVF International Conference on Computer Vision, 3766\u20133775."},{"key":"e_1_3_1_180_2","first-page":"9780","volume-title":"Proceedings of the AAAI Conference on Artificial Intelligence","volume":"33","author":"Pedreschi Dino","year":"2019","unstructured":"Dino Pedreschi, Fosca Giannotti, Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, and Franco Turini. 2019. Meaningful explanations of black box AI decision systems. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, 9780\u20139784."},{"issue":"5","key":"e_1_3_1_181_2","doi-asserted-by":"crossref","first-page":"61","DOI":"10.1007\/s10916-021-01736-5","article-title":"An explainable artificial intelligence framework for the deterioration risk prediction of hepatitis patients","volume":"45","author":"Peng Junfeng","year":"2021","unstructured":"Junfeng Peng, Kaiqiang Zou, Mi Zhou, Yi Teng, Xiongyong Zhu, Feifei Zhang, and Jun Xu. 2021. An explainable artificial intelligence framework for the deterioration risk prediction of hepatitis patients. Journal of Medical Systems 45, 5 (2021), 61.","journal-title":"Journal of Medical Systems"},{"key":"e_1_3_1_182_2","doi-asserted-by":"crossref","first-page":"193","DOI":"10.1016\/B978-155860808-5\/50008-3","article-title":"Distributed cognition","author":"Perry Mark","year":"2003","unstructured":"Mark Perry. 2003. Distributed cognition. 
In HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science. J. M. Carroll (Ed.), Elsevier, 193\u2013223.","journal-title":"HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science"},{"key":"e_1_3_1_183_2","doi-asserted-by":"crossref","first-page":"4389","DOI":"10.1145\/3511808.3557608","volume-title":"Proceedings of the 31st ACM International Conference on Information & Knowledge Management","author":"Prado-Romero Mario Alfonso","year":"2022","unstructured":"Mario Alfonso Prado-Romero and Giovanni Stilo. 2022. Gretel: Graph counterfactual explanation evaluation framework. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 4389\u20134393."},{"issue":"2","key":"e_1_3_1_184_2","doi-asserted-by":"crossref","first-page":"63","DOI":"10.1002\/isaf.1422","article-title":"Asking \u2018why\u2019 in AI: Explainability of intelligent systems\u2013perspectives and challenges","volume":"25","author":"Preece Alun","year":"2018","unstructured":"Alun Preece. 2018. Asking \u2018why\u2019 in AI: Explainability of intelligent systems\u2013perspectives and challenges. Intelligent Systems in Accounting, Finance and Management 25, 2 (2018), 63\u201372.","journal-title":"Intelligent Systems in Accounting, Finance and Management"},{"issue":"4","key":"e_1_3_1_185_2","first-page":"317","article-title":"Evaluating recommender systems from the user\u2019s perspective: Survey of the state of the art","volume":"22","author":"Pu Pearl","year":"2012","unstructured":"Pearl Pu, Li Chen, and Rong Hu. 2012. Evaluating recommender systems from the user\u2019s perspective: Survey of the state of the art. 
User Modeling and User-Adapted Interaction 22, 4 (2012), 317\u2013355.","journal-title":"User Modeling and User-Adapted Interaction"},{"key":"e_1_3_1_186_2","first-page":"1","article-title":"Detection of phonocardiogram event patterns in mitral valve prolapse: An automated clinically relevant explainable diagnostic framework","volume":"72","author":"Rajeshwari B. S.","year":"2023","unstructured":"B. S. Rajeshwari, Madhurima Patra, Aman Sinha, Arnab Sengupta, and Nirmalya Ghosh. 2023. Detection of phonocardiogram event patterns in mitral valve prolapse: An automated clinically relevant explainable diagnostic framework. IEEE Transactions on Instrumentation and Measurement 72 (2023), 1\u20139.","journal-title":"IEEE Transactions on Instrumentation and Measurement"},{"key":"e_1_3_1_187_2","doi-asserted-by":"crossref","first-page":"1305","DOI":"10.1145\/2702123.2702567","volume-title":"Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems","author":"Remy Christian","year":"2015","unstructured":"Christian Remy, Silke Gegenbauer, and Elaine M. Huang. 2015. Bridging the theory-practice gap: Lessons and challenges of applying the attachment framework for sustainable HCI design. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 1305\u20131314."},{"key":"e_1_3_1_188_2","volume-title":"Proceedings of CEUR Workshop","author":"Ribera Mireia","year":"2019","unstructured":"Mireia Ribera and Agata Lapedriza. 2019. Can we do better explanations? A proposal of user-centered Explainable AI. In Proceedings of CEUR Workshop."},{"key":"e_1_3_1_189_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.brat.2015.10.005"},{"key":"e_1_3_1_190_2","doi-asserted-by":"crossref","first-page":"110","DOI":"10.1145\/3503252.3531306","volume-title":"Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization","author":"Riveiro Maria","year":"2022","unstructured":"Maria Riveiro and Serge Thill. 2022. 
The challenges of providing explanations of AI systems when they do not behave like users expect. In Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, 110\u2013120."},{"key":"e_1_3_1_191_2","first-page":"1","volume-title":"2023 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB)","author":"Rizzo Matteo","year":"2023","unstructured":"Matteo Rizzo, Alberto Veneri, Andrea Albarelli, Claudio Lucchese, Marco Nobile, and Cristina Conati. 2023. A theoretical framework for AI models explainability with application in biomedicine. In 2023 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB). IEEE, 1\u20139."},{"key":"e_1_3_1_192_2","first-page":"1","volume-title":"Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents","author":"Robrecht Amelie Sophie","year":"2023","unstructured":"Amelie Sophie Robrecht, Markus Rothg\u00e4nger, and Stefan Ko. 2023. A study on the benefits and drawbacks of adaptivity in AI-generated explanations. In Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents, 1\u20138."},{"issue":"1","key":"e_1_3_1_193_2","doi-asserted-by":"crossref","first-page":"87","DOI":"10.1002\/aris.1440380103","article-title":"New theoretical approaches for HCI","volume":"38","author":"Rogers Yvonne","year":"2004","unstructured":"Yvonne Rogers. 2004. New theoretical approaches for HCI. Annual Review of Information Science and Technology 38, 1 (2004), 87\u2013143.","journal-title":"Annual Review of Information Science and Technology"},{"key":"e_1_3_1_194_2","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-031-02197-8","volume-title":"HCI Theory: Classical, Modern, and Contemporary","author":"Rogers Yvonne","year":"2012","unstructured":"Yvonne Rogers. 2012. HCI Theory: Classical, Modern, and Contemporary, Vol. 14. 
Morgan & Claypool Publishers."},{"key":"e_1_3_1_195_2","volume-title":"Interaction Design: Beyond Human-Computer Interaction","author":"Rogers Yvonne","year":"2011","unstructured":"Yvonne Rogers, Helen Sharp, and Jenny Preece. 2011. Interaction Design: Beyond Human-Computer Interaction (3rd ed.). Wiley, Chichester, UK.","edition":"3"},{"issue":"3","key":"e_1_3_1_196_2","doi-asserted-by":"crossref","first-page":"717","DOI":"10.1109\/TCDS.2020.3044366","article-title":"Explanation as a social practice: Toward a conceptual framework for the social design of AI systems","volume":"13","author":"Rohlfing Katharina J.","year":"2020","unstructured":"Katharina J. Rohlfing, Philipp Cimiano, Ingrid Scharlau, Tobias Matzner, Heike M. Buhl, Hendrik Buschmeier, Elena Esposito, Angela Grimminger, Barbara Hammer, Reinhold H\u00e4b-Umbach, et al. 2020. Explanation as a social practice: Toward a conceptual framework for the social design of AI systems. IEEE Transactions on Cognitive and Developmental Systems 13, 3 (2020), 717\u2013728.","journal-title":"IEEE Transactions on Cognitive and Developmental Systems"},{"key":"e_1_3_1_197_2","doi-asserted-by":"crossref","first-page":"673","DOI":"10.1007\/s10458-019-09408-y","article-title":"Explainability in human\u2013agent systems","volume":"33","author":"Rosenfeld Avi","year":"2019","unstructured":"Avi Rosenfeld and Ariella Richardson. 2019. Explainability in human\u2013agent systems. Autonomous Agents and Multi-Agent Systems 33 (2019), 673\u2013705.","journal-title":"Autonomous Agents and Multi-Agent Systems"},{"issue":"5","key":"e_1_3_1_198_2","doi-asserted-by":"crossref","first-page":"206","DOI":"10.1038\/s42256-019-0048-x","article-title":"Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead","volume":"1","author":"Rudin Cynthia","year":"2019","unstructured":"Cynthia Rudin. 2019. 
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence 1, 5 (2019), 206\u2013215.","journal-title":"Nature Machine Intelligence"},{"key":"e_1_3_1_199_2","doi-asserted-by":"crossref","first-page":"94","DOI":"10.1007\/978-3-030-51924-7_6","volume-title":"Explainable, Transparent Autonomous Agents and Multi-Agent Systems: Second International Workshop (EXTRAAMAS \u201920), Revised Selected Papers 2","author":"Sanneman Lindsay","year":"2020","unstructured":"Lindsay Sanneman and Julie A. Shah. 2020. A situation awareness-based framework for design and evaluation of Explainable AI. In Explainable, Transparent Autonomous Agents and Multi-Agent Systems: Second International Workshop (EXTRAAMAS \u201920), Revised Selected Papers 2. Springer, 94\u2013110."},{"key":"e_1_3_1_200_2","unstructured":"Advait Sarkar. 2022. Is Explainable AI a race against model complexity? arXiv:2205.10119. Retrieved from https:\/\/arxiv.org\/abs\/2205.10119"},{"issue":"2","key":"e_1_3_1_201_2","doi-asserted-by":"crossref","first-page":"443","DOI":"10.1007\/s00146-022-01422-1","article-title":"Minding the gap (s): Public perceptions of AI and socio-technical imaginaries","volume":"38","author":"Sartori Laura","year":"2023","unstructured":"Laura Sartori and Giulia Bocca. 2023. Minding the gap (s): Public perceptions of AI and socio-technical imaginaries. AI & Society 38, 2 (2023), 443\u2013458.","journal-title":"AI & Society"},{"key":"e_1_3_1_202_2","doi-asserted-by":"crossref","unstructured":"Johannes Schneider. 2024. Explainable generative AI (GenXAI): A survey conceptualization and research agenda. arXiv:2404.09554. 
Retrieved from https:\/\/arxiv.org\/abs\/2404.09554","DOI":"10.1007\/s10462-024-10916-x"},{"key":"e_1_3_1_203_2","first-page":"1","article-title":"A comprehensive taxonomy for explainable artificial intelligence: A systematic survey of surveys on methods and concepts","author":"Schwalbe Gesina","year":"2023","unstructured":"Gesina Schwalbe and Bettina Finzel. 2023. A comprehensive taxonomy for explainable artificial intelligence: A systematic survey of surveys on methods and concepts. Data Mining and Knowledge Discovery 38 (2023), 1\u201359.","journal-title":"Data Mining and Knowledge Discovery"},{"key":"e_1_3_1_204_2","first-page":"1","volume-title":"Proceedings of the CHI Conference on Human Factors in Computing Systems","author":"Sharma Nikhil","year":"2024","unstructured":"Nikhil Sharma, Q. Vera Liao, and Ziang Xiao. 2024. Generative echo chamber? Effect of LLM-Powered search systems on diverse information seeking. In Proceedings of the CHI Conference on Human Factors in Computing Systems, 1\u201317."},{"key":"e_1_3_1_205_2","unstructured":"Yonadav Shavit Sandhini Agarwal Miles Brundage Steven Adler Cullen O\u2019Keefe Rosie Campbell Teddy Lee Pamela Mishkin Tyna Eloundou Alan Hickey et al. 2023. Practices for Governing Agentic AI Systems. Research Paper OpenAI."},{"key":"e_1_3_1_206_2","doi-asserted-by":"crossref","DOI":"10.1093\/oso\/9780192845290.001.0001","volume-title":"Human-Centered AI","author":"Shneiderman Ben","year":"2022","unstructured":"Ben Shneiderman. 2022. Human-Centered AI. Oxford University Press."},{"key":"e_1_3_1_207_2","unstructured":"Chandan Singh Jeevana Priya Inala Michel Galley Rich Caruana and Jianfeng Gao. 2024. Rethinking interpretability in the era of large language models. arXiv:2402.01761. 
Retrieved from https:\/\/arxiv.org\/abs\/2402.01761"},{"issue":"4","key":"e_1_3_1_208_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3579363","article-title":"Directive explanations for actionable explainability in machine learning applications","volume":"13","author":"Singh Ronal","year":"2023","unstructured":"Ronal Singh, Tim Miller, Henrietta Lyons, Liz Sonenberg, Eduardo Velloso, Frank Vetere, Piers Howe, and Paul Dourish. 2023. Directive explanations for actionable explainability in machine learning applications. ACM Transactions on Interactive Intelligent Systems 13, 4 (2023), 1\u201326.","journal-title":"ACM Transactions on Interactive Intelligent Systems"},{"key":"e_1_3_1_209_2","doi-asserted-by":"crossref","first-page":"56","DOI":"10.1145\/3351095.3372870","volume-title":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","author":"Sokol Kacper","year":"2020","unstructured":"Kacper Sokol and Peter Flach. 2020. Explainability fact sheets: A framework for systematic assessment of explainable approaches. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 56\u201367."},{"issue":"1","key":"e_1_3_1_210_2","first-page":"1064","article-title":"Explainer: A visual analytics framework for interactive and explainable machine learning","volume":"26","author":"Spinner Thilo","year":"2019","unstructured":"Thilo Spinner, Udo Schlegel, Hanna Sch\u00e4fer, and Mennatallah El-Assady. 2019. Explainer: A visual analytics framework for interactive and explainable machine learning. 
IEEE Transactions on Visualization and Computer Graphics 26, 1 (2019), 1064\u20131074.","journal-title":"IEEE Transactions on Visualization and Computer Graphics"},{"key":"e_1_3_1_211_2","first-page":"4812","volume-title":"Proceedings of the 29th International Conference on International Joint Conferences on Artificial Intelligence","author":"Srinivasan Ramya","year":"2021","unstructured":"Ramya Srinivasan and Ajay Chander. 2021. Explanation perspectives from the cognitive sciences\u2014A survey. In Proceedings of the 29th International Conference on International Joint Conferences on Artificial Intelligence, 4812\u20134818."},{"issue":"3","key":"e_1_3_1_212_2","doi-asserted-by":"crossref","first-page":"387","DOI":"10.1177\/030631289019003001","article-title":"Institutional ecology, translations\u2019 and boundary objects: Amateurs and professionals in Berkeley\u2019s museum of vertebrate zoology, 1907-39","volume":"19","author":"Star Susan Leigh","year":"1989","unstructured":"Susan Leigh Star and James R. Griesemer. 1989. Institutional ecology, translations\u2019 and boundary objects: Amateurs and professionals in Berkeley\u2019s museum of vertebrate zoology, 1907-39. Social Studies of Science 19, 3 (1989), 387\u2013420.","journal-title":"Social Studies of Science"},{"issue":"2","key":"e_1_3_1_213_2","doi-asserted-by":"crossref","first-page":"221","DOI":"10.1038\/s42256-024-00976-7","article-title":"What large language models know and what people think they know","volume":"7","author":"Steyvers Mark","year":"2025","unstructured":"Mark Steyvers, Heliodoro Tejeda, Aakriti Kumar, Catarina Belem, Sheer Karny, Xinyue Hu, Lukas W. Mayer, and Padhraic Smyth. 2025. What large language models know and what people think they know. 
Nature Machine Intelligence 7, 2 (2025), 221\u2013231.","journal-title":"Nature Machine Intelligence"},{"issue":"2","key":"e_1_3_1_214_2","doi-asserted-by":"crossref","first-page":"95","DOI":"10.1080\/07370020903586696","article-title":"Concept-driven interaction design research","volume":"25","author":"Stolterman Erik","year":"2010","unstructured":"Erik Stolterman and Mikael Wiberg. 2010. Concept-driven interaction design research. Human\u2013Computer Interaction 25, 2 (2010), 95\u2013118.","journal-title":"Human\u2013Computer Interaction"},{"key":"e_1_3_1_215_2","first-page":"1","volume-title":"Proceedings of 2022 Mohammad Ali Jinnah University International Conference on Computing (MAJICC)","author":"Suffian Muhammad","year":"2022","unstructured":"Muhammad Suffian, Muhammad Yaseen Khan, and Alessandro Bogliolo. 2022. Towards human cognition level-based experiment design for counterfactual explanations. In Proceedings of 2022 Mohammad Ali Jinnah University International Conference on Computing (MAJICC). IEEE, 1\u20135."},{"key":"e_1_3_1_216_2","first-page":"1","volume-title":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","author":"Suresh Harini","year":"2021","unstructured":"Harini Suresh, Steven R. Gomez, Kevin K. Nam, and Arvind Satyanarayan. 2021. Beyond expertise and roles: A framework to characterize the stakeholders of interpretable machine learning and their needs. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1\u201316."},{"key":"e_1_3_1_217_2","volume-title":"Theory Building in Applied Disciplines","author":"Swanson Richard A.","year":"2013","unstructured":"Richard A. Swanson and Thomas J. Chermack. 2013. Theory Building in Applied Disciplines. Berrett-Koehler Publishers."},{"key":"e_1_3_1_218_2","doi-asserted-by":"publisher","unstructured":"Elham Tabassi. 2023. Artificial intelligence risk management framework (AI RMF 1.0). In NIST Trustworthy and Responsible AI. 
National Institute of Standards and Technology Gaithersburg MD [online]. Retrieved October 13 2025 from 10.6028\/NIST.AI.100-1 https:\/\/tsapps.nist.gov\/publication\/get_pdf.cfm?pub_id=936225","DOI":"10.6028\/NIST.AI.100-1"},{"issue":"5","key":"e_1_3_1_219_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3398069","article-title":"Machine learning in mental health: A systematic review of the HCI literature to support the development of effective and implementable ML systems","volume":"27","author":"Thieme Anja","year":"2020","unstructured":"Anja Thieme, Danielle Belgrave, and Gavin Doherty. 2020. Machine learning in mental health: A systematic review of the HCI literature to support the development of effective and implementable ML systems. ACM Transactions on Computer-Human Interaction 27, 5 (2020), 1\u201353.","journal-title":"ACM Transactions on Computer-Human Interaction"},{"issue":"5","key":"e_1_3_1_220_2","doi-asserted-by":"crossref","first-page":"40","DOI":"10.1145\/3411286","article-title":"Interpretability as a dynamic of human-AI interaction","volume":"27","author":"Thieme Anja","year":"2020","unstructured":"Anja Thieme, Ed Cutrell, Cecily Morrison, Alex Taylor, and Abigail Sellen. 2020. Interpretability as a dynamic of human-AI interaction. Interactions 27, 5 (2020), 40\u201345.","journal-title":"Interactions"},{"issue":"2","key":"e_1_3_1_221_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3564752","article-title":"Designing human-centered AI for mental health: Developing clinically relevant applications for online CBT treatment","volume":"30","author":"Thieme Anja","year":"2023","unstructured":"Anja Thieme, Maryann Hanratty, Maria Lyons, Jorge Palacios, Rita Faia Marques, Cecily Morrison, and Gavin Doherty. 2023. Designing human-centered AI for mental health: Developing clinically relevant applications for online CBT treatment.
ACM Transactions on Computer-Human Interaction 30, 2 (2023), 1\u201350.","journal-title":"ACM Transactions on Computer-Human Interaction"},{"key":"e_1_3_1_222_2","doi-asserted-by":"crossref","first-page":"187","DOI":"10.1007\/978-3-030-50788-6_14","volume-title":"Proceedings of Adaptive Instructional Systems: Second International Conference (AIS \u201920), Held as Part of the 22nd HCI International Conference (HCII \u201920)","author":"Thomson Robert","year":"2020","unstructured":"Robert Thomson and Jordan Richard Schoenherr. 2020. Knowledge-to-information translation training (KITT): An adaptive approach to explainable artificial intelligence. In Proceedings of Adaptive Instructional Systems: Second International Conference (AIS \u201920), Held as Part of the 22nd HCI International Conference (HCII \u201920). Springer, 187\u2013204."},{"key":"e_1_3_1_223_2","first-page":"1","volume-title":"Proceedings of the 8th International Conference on Knowledge Capture","author":"Tiddi Ilaria","year":"2015","unstructured":"Ilaria Tiddi, Mathieu d\u2019Aquin, and Enrico Motta. 2015. An ontology design pattern to define explanations. In Proceedings of the 8th International Conference on Knowledge Capture, 1\u20138."},{"key":"e_1_3_1_224_2","doi-asserted-by":"crossref","first-page":"103627","DOI":"10.1016\/j.artint.2021.103627","article-title":"Knowledge graphs as tools for explainable machine learning: A survey","volume":"302","author":"Tiddi Ilaria","year":"2022","unstructured":"Ilaria Tiddi and Stefan Schlobach. 2022. Knowledge graphs as tools for explainable machine learning: A survey. Artificial Intelligence 302 (2022), 103627.","journal-title":"Artificial Intelligence"},{"key":"e_1_3_1_225_2","unstructured":"Richard Tomsett Dave Braines Dan Harborne Alun Preece and Supriyo Chakraborty. 2018. Interpretable to whom? A role-based model for analyzing interpretable machine learning systems. arXiv:1806.07552. 
Retrieved from https:\/\/arxiv.org\/abs\/1806.07552"},{"issue":"7","key":"e_1_3_1_226_2","doi-asserted-by":"crossref","first-page":"467","DOI":"10.7326\/M18-0850","article-title":"PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation","volume":"169","author":"Tricco Andrea C.","year":"2018","unstructured":"Andrea C. Tricco, Erin Lillie, Wasifa Zarin, Kelly K. O\u2019Brien, Heather Colquhoun, Danielle Levac, David Moher, Micah D. J. Peters, Tanya Horsley, Laura Weeks, et al. 2018. PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Annals of Internal Medicine 169, 7 (2018), 467\u2013473.","journal-title":"Annals of Internal Medicine"},{"key":"e_1_3_1_227_2","first-page":"1","article-title":"A scoping review on the conduct and reporting of scoping reviews","volume":"16","author":"Tricco Andrea C.","year":"2016","unstructured":"Andrea C. Tricco, Erin Lillie, Wasifa Zarin, Kelly O\u2019Brien, Heather Colquhoun, Monika Kastner, Danielle Levac, Carmen Ng, Jane Pearson Sharpe, Katherine Wilson, et al. 2016. A scoping review on the conduct and reporting of scoping reviews. BMC Medical Research Methodology 16 (2016), 1\u201310.","journal-title":"BMC Medical Research Methodology"},{"key":"e_1_3_1_228_2","doi-asserted-by":"crossref","first-page":"119","DOI":"10.1145\/3640544.3645253","volume-title":"Companion Proceedings of the 29th International Conference on Intelligent User Interfaces","author":"Turchi Tommaso","year":"2024","unstructured":"Tommaso Turchi, Alessio Malizia, Fabio Patern\u00f2, Simone Borsci, and Alan Chamberlain. 2024. Adaptive XAI: Towards intelligent interfaces for tailored AI explanations. 
In Companion Proceedings of the 29th International Conference on Intelligent User Interfaces, 119\u2013121."},{"issue":"1","key":"e_1_3_1_229_2","doi-asserted-by":"crossref","first-page":"389","DOI":"10.1186\/s12913-022-07702-2","article-title":"Concepts of health in different contexts: A scoping review","volume":"22","author":"Van Druten V. P.","year":"2022","unstructured":"V. P. Van Druten, E. A. Bartels, D. Van de Mheen, E. De Vries, A. P. M. Kerckhoffs, and L. M. W. Nahar-van Venrooij. 2022. Concepts of health in different contexts: A scoping review. BMC Health Services Research 22, 1 (2022), 389.","journal-title":"BMC Health Services Research"},{"key":"e_1_3_1_230_2","unstructured":"Koen van Turnhout Marjolein Jacobs Miriam Losse Thea van der Geest and Ren\u00e9 Bakker. 2019. A Practical Take on Theory in HCI. White Paper."},{"key":"e_1_3_1_231_2","volume-title":"The Nature of Statistical Learning Theory","author":"Vapnik Vladimir","year":"2013","unstructured":"Vladimir Vapnik. 2013. The Nature of Statistical Learning Theory. Springer Science & Business Media."},{"issue":"5","key":"e_1_3_1_232_2","doi-asserted-by":"crossref","first-page":"988","DOI":"10.1109\/72.788640","article-title":"An overview of statistical learning theory","volume":"10","author":"Vapnik Vladimir N.","year":"1999","unstructured":"Vladimir N. Vapnik. 1999. An overview of statistical learning theory. IEEE Transactions on Neural Networks 10, 5 (1999), 988\u2013999.","journal-title":"IEEE Transactions on Neural Networks"},{"key":"e_1_3_1_233_2","doi-asserted-by":"crossref","first-page":"989","DOI":"10.1097\/ACM.0000000000003075","article-title":"The distinctions between theory, theoretical framework, and conceptual framework","volume":"95","author":"Varpio Lara","year":"2020","unstructured":"Lara Varpio, Elise Paradis, Sebastian Uijtdehaage, and Meredith Young. 2020. The distinctions between theory, theoretical framework, and conceptual framework. 
Academic Medicine: Journal of the Association of American Medical Colleges 95 (2020), 989\u2013994.","journal-title":"Academic Medicine: Journal of the Association of American Medical Colleges"},{"key":"e_1_3_1_234_2","first-page":"3011","volume-title":"Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems","author":"Vasileiou Stylianos Loukas","year":"2023","unstructured":"Stylianos Loukas Vasileiou. 2023. Towards a logical account for human-aware explanation generation in model reconciliation problems. In Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems, 3011\u20133013."},{"key":"e_1_3_1_235_2","doi-asserted-by":"crossref","first-page":"2091","DOI":"10.1145\/3025453.3026022","volume-title":"Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems","author":"Velt Raphael","year":"2017","unstructured":"Raphael Velt, Steve Benford, and Stuart Reeves. 2017. A survey of the trajectories conceptual framework: Investigating theory use in HCI. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2091\u20132105."},{"issue":"4","key":"e_1_3_1_236_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3386247","article-title":"Translations and boundaries in the gap between HCI theory and design practice","volume":"27","author":"Velt Raphael","year":"2020","unstructured":"Raphael Velt, Steve Benford, and Stuart Reeves. 2020. Translations and boundaries in the gap between HCI theory and design practice. ACM Transactions on Computer-Human Interaction 27, 4 (2020), 1\u201328.","journal-title":"ACM Transactions on Computer-Human Interaction"},{"key":"e_1_3_1_237_2","doi-asserted-by":"crossref","first-page":"119","DOI":"10.1007\/978-3-030-82017-6_8","volume-title":"Proceedings of International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems","author":"Verhagen Ruben S.","year":"2021","unstructured":"Ruben S.
Verhagen, Mark A. Neerincx, and Myrthe L. Tielman. 2021. A two-dimensional explanation framework to classify AI as incomprehensible, interpretable, or understandable. In Proceedings of International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems. Springer, 119\u2013138."},{"key":"e_1_3_1_238_2","doi-asserted-by":"crossref","first-page":"89","DOI":"10.1016\/j.inffus.2021.05.009","article-title":"Notions of explainability and evaluation approaches for explainable artificial intelligence","volume":"76","author":"Vilone Giulia","year":"2021","unstructured":"Giulia Vilone and Luca Longo. 2021. Notions of explainability and evaluation approaches for explainable artificial intelligence. Information Fusion 76 (2021), 89\u2013106.","journal-title":"Information Fusion"},{"key":"e_1_3_1_239_2","first-page":"1","volume-title":"Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems","author":"Wang Danding","year":"2019","unstructured":"Danding Wang, Qian Yang, Ashraf Abdul, and Brian Y. Lim. 2019. Designing theory-driven user-centric Explainable AI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1\u201315."},{"issue":"137","key":"e_1_3_1_240_2","first-page":"1","article-title":"Hybrid predictive models: When an interpretable model collaborates with a black-box model","volume":"22","author":"Wang Tong","year":"2021","unstructured":"Tong Wang and Qihang Lin. 2021. Hybrid predictive models: When an interpretable model collaborates with a black-box model. Journal of Machine Learning Research 22, 137 (2021), 1\u201338.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_1_241_2","first-page":"1382","volume-title":"Uncertainty in Artificial Intelligence","author":"Watson David S.","year":"2021","unstructured":"David S. Watson, Limor Gultchin, Ankur Taly, and Luciano Floridi. 2021. Local explanations via necessity and sufficiency: Unifying theory and practice. 
In Uncertainty in Artificial Intelligence. PMLR, 1382\u20131392."},{"key":"e_1_3_1_242_2","unstructured":"Laura Weidinger Maribeth Rauh Nahema Marchal Arianna Manzini Lisa Anne Hendricks Juan Mateos-Garcia Stevie Bergman Jackie Kay Conor Griffin Ben Bariach et al. 2023. Sociotechnical safety evaluation of generative AI systems. arXiv:2310.11986. Retrieved from https:\/\/arxiv.org\/abs\/2310.11986"},{"key":"e_1_3_1_243_2","doi-asserted-by":"crossref","first-page":"205","DOI":"10.1007\/978-3-030-30391-4_12","volume-title":"Explainable, Transparent Autonomous Agents and Multi-Agent Systems: First International Workshop (EXTRAAMAS \u201919), Revised Selected Papers 1","author":"Westberg Marcus","year":"2019","unstructured":"Marcus Westberg, Amber Zelvelder, and Amro Najjar. 2019. A historical perspective on cognitive science and its influence on XAI research. In Explainable, Transparent Autonomous Agents and Multi-Agent Systems: First International Workshop (EXTRAAMAS \u201919), Revised Selected Papers 1. Springer, 205\u2013219."},{"issue":"2","key":"e_1_3_1_244_2","doi-asserted-by":"crossref","first-page":"100072","DOI":"10.1016\/j.chbah.2024.100072","article-title":"Exploring people\u2019s perceptions of LLM-generated advice","volume":"2","author":"Wester Joel","year":"2024","unstructured":"Joel Wester, Sander De Jong, Henning Pohl, and Niels Van Berkel. 2024. Exploring people\u2019s perceptions of LLM-generated advice. Computers in Human Behavior: Artificial Humans 2, 2 (2024), 100072.","journal-title":"Computers in Human Behavior: Artificial Humans"},{"key":"e_1_3_1_245_2","first-page":"1","article-title":"Disseminating research findings: What should researchers do? A systematic scoping review of conceptual frameworks","volume":"5","author":"Wilson Paul M.","year":"2010","unstructured":"Paul M. Wilson, Mark Petticrew, Mike W. Calnan, and Irwin Nazareth. 2010. Disseminating research findings: What should researchers do? 
A systematic scoping review of conceptual frameworks. Implementation Science 5 (2010), 1\u201316.","journal-title":"Implementation Science"},{"key":"e_1_3_1_246_2","article-title":"IEEE standard for transparency of autonomous systems","author":"Winfield Alan","year":"2022","unstructured":"Alan Winfield, Eleanor Watson, Takashi Egawa, Emily Barwell, Iain Barclay, Serena Booth, Louise A. Dennis, Helen Hastie, Ali Hossaini, Naomi Jacobs, et al. 2022. IEEE standard for transparency of autonomous systems. Institute of Electrical and Electronics Engineers (IEEE) (2022).","journal-title":"Institute of Electrical and Electronics Engineers (IEEE)"},{"issue":"3","key":"e_1_3_1_247_2","doi-asserted-by":"crossref","first-page":"38","DOI":"10.1145\/2907069","volume":"23","author":"Wobbrock Jacob O.","year":"2016","unstructured":"Jacob O. Wobbrock and Julie A. Kientz. 2016. Research contributions in human-computer interaction. Interactions 23, 3 (2016), 38\u201344.","journal-title":"Interactions"},{"key":"e_1_3_1_248_2","first-page":"57","volume-title":"Proceedings of International Conference on Artificial Intelligence in Medicine","author":"Van Woensel William","year":"2022","unstructured":"William Van Woensel, Floriano Scioscia, Giuseppe Loseto, Oshani Seneviratne, Evan Patton, Samina Abidi, and Lalana Kagal. 2022. Explainable clinical decision support: Towards patient-facing explanations for education and long-term behavior change. In Proceedings of International Conference on Artificial Intelligence in Medicine. 
Springer, 57\u201362."},{"key":"e_1_3_1_249_2","doi-asserted-by":"crossref","first-page":"103839","DOI":"10.1016\/j.artint.2022.103839","article-title":"Assessing the communication gap between AI models and healthcare professionals: Explainability, utility and trust in AI-driven clinical decision-making","volume":"316","author":"Wysocki Oskar","year":"2023","unstructured":"Oskar Wysocki, Jessica Katharine Davies, Markel Vigo, Anne Caroline Armstrong, D\u00f3nal Landers, Rebecca Lee, and Andr\u00e9 Freitas. 2023. Assessing the communication gap between AI models and healthcare professionals: Explainability, utility and trust in AI-driven clinical decision-making. Artificial Intelligence 316 (2023), 103839.","journal-title":"Artificial Intelligence"},{"key":"e_1_3_1_250_2","first-page":"563","volume-title":"Proceedings of Natural Language Processing and Chinese Computing: 8th CCF International Conference (NLPCC \u201919)","author":"Xu Feiyu","year":"2019","unstructured":"Feiyu Xu, Hans Uszkoreit, Yangzhou Du, Wei Fan, Dongyan Zhao, and Jun Zhu. 2019. Explainable AI: A brief survey on history, research areas, approaches and challenges. In Proceedings of Natural Language Processing and Chinese Computing: 8th CCF International Conference (NLPCC \u201919), Part II. Springer, 563\u2013574."},{"key":"e_1_3_1_251_2","first-page":"1","volume-title":"Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems","author":"Xu Xuhai","year":"2023","unstructured":"Xuhai Xu, Anna Yu, Tanya R. Jonker, Kashyap Todi, Feiyu Lu, Xun Qian, Jo\u00e3o Marcelo Evangelista Belo, Tianyi Wang, Michelle Li, Aran Mun, et al. 2023. XAIR: A framework of Explainable AI in Augmented Reality. 
In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1\u201330."},{"issue":"4","key":"e_1_3_1_252_2","doi-asserted-by":"crossref","first-page":"e37","DOI":"10.1002\/ail2.37","article-title":"Abstraction, validation, and generalization for explainable artificial intelligence","volume":"2","author":"Yang Scott Cheng-Hsin","year":"2021","unstructured":"Scott Cheng-Hsin Yang, Tomas Folke, and Patrick Shafto. 2021. Abstraction, validation, and generalization for explainable artificial intelligence. Applied AI Letters 2, 4 (2021), e37.","journal-title":"Applied AI Letters"},{"key":"e_1_3_1_253_2","unstructured":"Asaf Yehudai Lilach Eden Alan Li Guy Uziel Yilun Zhao Roy Bar-Haim Arman Cohan and Michal Shmueli-Scheuer. 2025. Survey on evaluation of LLM-based agents. arXiv:2503.16416. Retrieved from https:\/\/arxiv.org\/abs\/2503.16416"},{"issue":"2","key":"e_1_3_1_254_2","doi-asserted-by":"crossref","first-page":"265","DOI":"10.1007\/s13347-019-00382-7","article-title":"Solving the black box problem: A normative framework for explainable artificial intelligence","volume":"34","author":"Zednik Carlos","year":"2021","unstructured":"Carlos Zednik. 2021. Solving the black box problem: A normative framework for explainable artificial intelligence. Philosophy & Technology 34, 2 (2021), 265\u2013288.","journal-title":"Philosophy & Technology"},{"issue":"1","key":"e_1_3_1_255_2","doi-asserted-by":"crossref","first-page":"364","DOI":"10.1109\/TVCG.2018.2864499","article-title":"Manifold: A model-agnostic framework for interpretation and diagnosis of machine learning models","volume":"25","author":"Zhang Jiawei","year":"2018","unstructured":"Jiawei Zhang, Yang Wang, Piero Molino, Lezhi Li, and David S. Ebert. 2018. Manifold: A model-agnostic framework for interpretation and diagnosis of machine learning models. 
IEEE Transactions on Visualization and Computer Graphics 25, 1 (2018), 364\u2013373.","journal-title":"IEEE Transactions on Visualization and Computer Graphics"},{"issue":"2","key":"e_1_3_1_256_2","doi-asserted-by":"crossref","first-page":"47","DOI":"10.26599\/IJCS.2022.9100034","article-title":"EID: Facilitating Explainable AI design discussions in team-based settings","volume":"7","author":"Zhang Jiehuang","year":"2023","unstructured":"Jiehuang Zhang and Han Yu. 2023. EID: Facilitating Explainable AI design discussions in team-based settings. International Journal of Crowd Science 7, 2 (2023), 47\u201354.","journal-title":"International Journal of Crowd Science"},{"key":"e_1_3_1_257_2","first-page":"1","volume-title":"Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems","author":"Zhang Wencan","year":"2022","unstructured":"Wencan Zhang and Brian Y. Lim. 2022. Towards relatable Explainable AI with the perceptual process. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1\u201324."},{"key":"e_1_3_1_258_2","first-page":"108","volume-title":"Proceedings of International Conference on Human-Computer Interaction","author":"Zhang Yiwen","year":"2022","unstructured":"Yiwen Zhang, Weiwei Guo, Cheng Chi, Lu Hou, and Xiaohua Sun. 2022. Towards scenario-based and question-driven explanations in autonomous vehicles. In Proceedings of International Conference on Human-Computer Interaction. Springer, 108\u2013120."},{"key":"e_1_3_1_259_2","doi-asserted-by":"crossref","first-page":"1747","DOI":"10.1145\/3318464.3389720","volume-title":"Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data","author":"Zheng Kaiping","year":"2020","unstructured":"Kaiping Zheng, Shaofeng Cai, Horng Ruey Chua, Wei Wang, Kee Yuan Ngiam, and Beng Chin Ooi. 2020. Tracer: A framework for facilitating accurate and interpretable analytics for high stakes applications. 
In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, 1747\u20131763."},{"key":"e_1_3_1_260_2","first-page":"1","volume-title":"2018 IEEE Conference on Computational Intelligence and Games (CIG)","author":"Zhu Jichen","year":"2018","unstructured":"Jichen Zhu, Antonios Liapis, Sebastian Risi, Rafael Bidarra, and G. Michael Youngblood. 2018. Explainable AI for designers: A human-centered perspective on mixed-initiative co-creation. In 2018 IEEE Conference on Computational Intelligence and Games (CIG). IEEE, 1\u20138."}],"container-title":["ACM Transactions on Computer-Human Interaction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3769678","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,9]],"date-time":"2025-12-09T13:58:12Z","timestamp":1765288692000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3769678"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12,9]]},"references-count":259,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2025,12,31]]}},"alternative-id":["10.1145\/3769678"],"URL":"https:\/\/doi.org\/10.1145\/3769678","relation":{},"ISSN":["1073-0516","1557-7325"],"issn-type":[{"value":"1073-0516","type":"print"},{"value":"1557-7325","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,12,9]]},"assertion":[{"value":"2024-11-13","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-09-15","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-12-09","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}