{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,16]],"date-time":"2026-04-16T19:49:07Z","timestamp":1776368947549,"version":"3.51.2"},"reference-count":113,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2024,2,5]],"date-time":"2024-02-05T00:00:00Z","timestamp":1707091200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Interact. Intell. Syst."],"published-print":{"date-parts":[[2024,3,31]]},"abstract":"<jats:p>Explanation of artificial intelligence (AI) decision-making has become an important research area in human\u2013computer interaction (HCI) and computer-supported teamwork research. While plenty of research has investigated AI explanations with an intent to improve AI transparency and human trust in AI, how AI explanations function in teaming environments remains unclear. Given that a major benefit of AI giving explanations is to increase human trust, understanding how AI explanations impact human trust is crucial to effective human-AI teamwork. An online experiment was conducted with 156 participants to explore this question by examining how a teammate\u2019s explanations impact the perceived trust of the teammate and the effectiveness of the team and how these impacts vary based on whether the teammate is a human or an AI. This study shows that explanations facilitated trust in AI teammates when explaining why AI disobeyed humans\u2019 orders but hindered trust when explaining why an AI lied to humans. In addition, participants\u2019 personal characteristics (e.g., their gender and the individual\u2019s ethical framework) impacted their perceptions of AI teammates both directly and indirectly in different scenarios.
Our study contributes to interactive intelligent systems and HCI by shedding light on how an AI teammate\u2019s actions and corresponding explanations are perceived by humans while identifying factors that impact trust and perceived effectiveness. This work provides an initial understanding of AI explanations in human-AI teams, which can be used for future research to build upon in exploring AI explanation implementation in collaborative environments.<\/jats:p>","DOI":"10.1145\/3635474","type":"journal-article","created":{"date-parts":[[2023,12,2]],"date-time":"2023-12-02T11:54:35Z","timestamp":1701518075000},"page":"1-23","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":22,"title":["I Know This Looks Bad, But I Can Explain: Understanding When AI Should Explain Actions In Human-AI Teams"],"prefix":"10.1145","volume":"14","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0902-1364","authenticated-orcid":false,"given":"Rui","family":"Zhang","sequence":"first","affiliation":[{"name":"Clemson University, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5448-2610","authenticated-orcid":false,"given":"Christopher","family":"Flathmann","sequence":"additional","affiliation":[{"name":"Clemson University, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6056-4778","authenticated-orcid":false,"given":"Geoff","family":"Musick","sequence":"additional","affiliation":[{"name":"Clemson University, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3704-697X","authenticated-orcid":false,"given":"Beau","family":"Schelble","sequence":"additional","affiliation":[{"name":"Clemson University, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9143-2460","authenticated-orcid":false,"given":"Nathan J.","family":"McNeese","sequence":"additional","affiliation":[{"name":"Clemson University, 
USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1341-0669","authenticated-orcid":false,"given":"Bart","family":"Knijnenburg","sequence":"additional","affiliation":[{"name":"Clemson University, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2541-888X","authenticated-orcid":false,"given":"Wen","family":"Duan","sequence":"additional","affiliation":[{"name":"Clemson University, USA"}]}],"member":"320","published-online":{"date-parts":[[2024,2,5]]},"reference":[{"key":"e_1_3_1_2_2","doi-asserted-by":"publisher","DOI":"10.1145\/3173574.3174156"},{"key":"e_1_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2870052"},{"key":"e_1_3_1_4_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2019.12.012"},{"key":"e_1_3_1_5_2","doi-asserted-by":"crossref","unstructured":"Vijay Arya Rachel KE Bellamy Pin-Yu Chen Amit Dhurandhar Michael Hind Samuel C. Hoffman Stephanie Houde Q. Vera Liao Ronny Luss Aleksandra Mojsilovi\u0107 et\u00a0al. 2019. One explanation does not fit all: A toolkit and taxonomy of ai explainability techniques. arXiv preprint arXiv:1909.03012 (2019).","DOI":"10.1145\/3351095.3375667"},{"key":"e_1_3_1_6_2","volume-title":"Factorial Survey Experiments","author":"Auspurg Katrin","year":"2014","unstructured":"Katrin Auspurg and Thomas Hinz. 2014. Factorial Survey Experiments. Vol. 175. 
Sage Publications."},{"key":"e_1_3_1_7_2","doi-asserted-by":"publisher","DOI":"10.1145\/3181671"},{"key":"e_1_3_1_8_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33012429"},{"key":"e_1_3_1_9_2","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445717"},{"key":"e_1_3_1_10_2","doi-asserted-by":"publisher","DOI":"10.1111\/ajps.12081"},{"key":"e_1_3_1_11_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.chb.2015.08.031"},{"key":"e_1_3_1_12_2","doi-asserted-by":"publisher","DOI":"10.1145\/3377325.3377498"},{"issue":"4","key":"e_1_3_1_13_2","doi-asserted-by":"crossref","first-page":"271","DOI":"10.1016\/j.aucc.2012.07.002","article-title":"Sample size: How many is enough?","volume":"25","author":"Burmeister Elizabeth","year":"2012","unstructured":"Elizabeth Burmeister and Leanne M. Aitken. 2012. Sample size: How many is enough? Australian Critical Care 25, 4 (2012), 271\u2013274.","journal-title":"Australian Critical Care"},{"key":"e_1_3_1_14_2","first-page":"1","volume-title":"Proceedings of the 25th International Conference on Intelligent User Interfaces","author":"Burnett Margaret","year":"2020","unstructured":"Margaret Burnett. 2020. Explaining AI: Fairly? well?. In Proceedings of the 25th International Conference on Intelligent User Interfaces. 1\u20132."},{"key":"e_1_3_1_15_2","doi-asserted-by":"publisher","DOI":"10.1145\/3359206"},{"key":"e_1_3_1_16_2","doi-asserted-by":"publisher","DOI":"10.1145\/3514257"},{"key":"e_1_3_1_17_2","first-page":"1466","volume-title":"Proceedings of the Human Factors and Ergonomics Society Annual Meeting","volume":"63","author":"Canonico Lorenzo Barberis","year":"2019","unstructured":"Lorenzo Barberis Canonico, Christopher Flathmann, and Nathan McNeese. 2019. Collectively intelligent teams: Integrating team cognition, collective intelligence, and AI for future Teaming. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 63. 
SAGE Publications Sage CA: Los Angeles, CA, 1466\u20131470."},{"key":"e_1_3_1_18_2","unstructured":"Tathagata Chakraborti and Subbarao Kambhampati. 2018. Algorithms for the greater good! on mental modeling and acceptable symbiosis in human-ai collaboration. arXiv preprint arXiv:1801.09854 (2018)."},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.1037\/0021-9010.86.3.481"},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.apergo.2015.07.012"},{"key":"e_1_3_1_21_2","doi-asserted-by":"publisher","DOI":"10.1177\/0018720811435843"},{"key":"e_1_3_1_22_2","doi-asserted-by":"publisher","DOI":"10.1080\/1463922X.2017.1315750"},{"key":"e_1_3_1_23_2","doi-asserted-by":"publisher","DOI":"10.1080\/1463922X.2017.1397229"},{"key":"e_1_3_1_24_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.future.2018.01.055"},{"key":"e_1_3_1_25_2","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445351"},{"key":"e_1_3_1_26_2","volume-title":"Synthetic Teammates as Team Players: Coordination of Human and Synthetic Teammates","author":"Cooke Nancy J.","year":"2016","unstructured":"Nancy J. Cooke, Mustafa Demir, and Nathan McNeese. 2016. Synthetic Teammates as Team Players: Coordination of Human and Synthetic Teammates. Technical Report. Cognitive Engineering Research Institute Mesa United States."},{"key":"e_1_3_1_27_2","first-page":"28","volume-title":"Proceedings of the 2016 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA)","author":"Demir Mustafa","year":"2016","unstructured":"Mustafa Demir, Nathan J. McNeese, and Nancy J. Cooke. 2016. Team communication behaviors of the human-automation teaming. In Proceedings of the 2016 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA). 
IEEE, 28\u201334."},{"key":"e_1_3_1_28_2","first-page":"0210","volume-title":"Proceedings of the 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO)","author":"Do\u0161ilovi\u0107 Filip Karlo","year":"2018","unstructured":"Filip Karlo Do\u0161ilovi\u0107, Mario Br\u010di\u0107, and Nikica Hlupi\u0107. 2018. Explainable artificial intelligence: A survey. In Proceedings of the 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). IEEE, 0210\u20130215."},{"key":"e_1_3_1_29_2","doi-asserted-by":"crossref","DOI":"10.4324\/9780203149256","volume-title":"Consequentialism","author":"Driver Julia","year":"2011","unstructured":"Julia Driver. 2011. Consequentialism. Routledge."},{"key":"e_1_3_1_30_2","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445188"},{"key":"e_1_3_1_31_2","unstructured":"Upol Ehsan Samir Passi Q. Vera Liao Larry Chan I Lee Michael Muller Mark O. Riedl et\u00a0al. 2021. The who in explainable ai: How ai background shapes perceptions of ai explanations. arXiv preprint arXiv:2107.13509 (2021)."},{"key":"e_1_3_1_32_2","doi-asserted-by":"publisher","DOI":"10.1177\/0018720816681350"},{"key":"e_1_3_1_33_2","first-page":"322","volume-title":"Proceedings of the Human Factors and Ergonomics Society Annual Meeting","volume":"63","author":"Ezer Neta","year":"2019","unstructured":"Neta Ezer, Sylvain Bruni, Yang Cai, Sam J. Hepenstal, Christopher A. Miller, and Dylan D. Schmorrow. 2019. Trust engineering for Human-AI teams. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 63. 
SAGE Publications Sage CA: Los Angeles, CA, 322\u2013326."},{"issue":"3","key":"e_1_3_1_34_2","doi-asserted-by":"crossref","first-page":"195","DOI":"10.1080\/07370020903586720","article-title":"NDM-based cognitive agents for supporting decision-making teams","volume":"25","author":"Fan Xiaocong","year":"2010","unstructured":"Xiaocong Fan, Michael McNeese, and John Yen. 2010. NDM-based cognitive agents for supporting decision-making teams. Human\u2013Computer Interaction 25, 3 (2010), 195\u2013234.","journal-title":"Human\u2013Computer Interaction"},{"key":"e_1_3_1_35_2","doi-asserted-by":"publisher","DOI":"10.1145\/1473018.1473028"},{"issue":"2","key":"e_1_3_1_36_2","doi-asserted-by":"crossref","first-page":"354","DOI":"10.1109\/TSMCB.2010.2053705","article-title":"Modeling cognitive loads for evolving shared mental models in human\u2013agent collaboration","volume":"41","author":"Fan Xiaocong","year":"2010","unstructured":"Xiaocong Fan and John Yen. 2010. Modeling cognitive loads for evolving shared mental models in human\u2013agent collaboration. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 41, 2 (2010), 354\u2013367.","journal-title":"IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)"},{"key":"e_1_3_1_37_2","doi-asserted-by":"publisher","DOI":"10.1145\/3301275.3302265"},{"key":"e_1_3_1_38_2","first-page":"1","article-title":"In AI we trust incrementally: A multi-layer model of trust to analyze human-artificial intelligence interactions","author":"Ferrario Andrea","year":"2019","unstructured":"Andrea Ferrario, Michele Loi, and Eleonora Vigan\u00f2. 2019. In AI we trust incrementally: A multi-layer model of trust to analyze human-artificial intelligence interactions. 
Philosophy & Technology (2019), 1\u201317.","journal-title":"Philosophy & Technology"},{"key":"e_1_3_1_39_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1080\/07370024.2023.2189595","article-title":"Understanding the impact and design of AI teammate etiquette","author":"Flathmann Christopher","year":"2023","unstructured":"Christopher Flathmann, Nathan J. McNeese, Beau Schelble, Bart Knijnenburg, and Guo Freeman. 2023. Understanding the impact and design of AI teammate etiquette. Human\u2013Computer Interaction 0, 0 (2023), 1\u201328.","journal-title":"Human\u2013Computer Interaction"},{"key":"e_1_3_1_40_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijhcs.2023.103061"},{"key":"e_1_3_1_41_2","doi-asserted-by":"publisher","DOI":"10.1145\/3461702.3462573"},{"key":"e_1_3_1_42_2","doi-asserted-by":"publisher","DOI":"10.1109\/MCG.2008.79"},{"issue":"2","key":"e_1_3_1_43_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3479522","article-title":"Difficulties of measuring culture in privacy studies","volume":"5","author":"Anaraky Reza Ghaiumy","year":"2021","unstructured":"Reza Ghaiumy Anaraky, Yao Li, and Bart Knijnenburg. 2021. Difficulties of measuring culture in privacy studies. Proceedings of the ACM on Human\u2013Computer Interaction 5, CSCW2 (2021), 1\u201326.","journal-title":"Proceedings of the ACM on Human\u2013Computer Interaction"},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.5465\/annals.2018.0057"},{"key":"e_1_3_1_45_2","doi-asserted-by":"publisher","DOI":"10.3390\/s21165526"},{"key":"e_1_3_1_46_2","unstructured":"David Gunning. 2017. Explainable artificial intelligence (xai). Defense Advanced Research Projects Agency (DARPA) nd Web 2 2 (2017)."},{"key":"e_1_3_1_47_2","doi-asserted-by":"publisher","DOI":"10.1177\/0018720814547570"},{"key":"e_1_3_1_48_2","volume-title":"Structural Equation Modeling: Concepts, Issues, and Applications","author":"Hoyle Rick H.","year":"1995","unstructured":"Rick H. Hoyle. 1995. 
Structural Equation Modeling: Concepts, Issues, and Applications. Sage."},{"key":"e_1_3_1_49_2","first-page":"282","volume-title":"Proceedings of the International Conference on Applied Human Factors and Ergonomics","author":"Huang Hsiao-Ying","year":"2017","unstructured":"Hsiao-Ying Huang and Masooda Bashir. 2017. Users\u2019 trust in automation: A cultural perspective. In Proceedings of the International Conference on Applied Human Factors and Ergonomics. Springer, 282\u2013289."},{"key":"e_1_3_1_50_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.chb.2021.106881"},{"key":"e_1_3_1_51_2","doi-asserted-by":"crossref","first-page":"624","DOI":"10.1145\/3442188.3445923","volume-title":"Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency","author":"Jacovi Alon","year":"2021","unstructured":"Alon Jacovi, Ana Marasovi\u0107, Tim Miller, and Yoav Goldberg. 2021. Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in ai. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 624\u2013635."},{"key":"e_1_3_1_52_2","first-page":"001872082110473","article-title":"The impact of training on human\u2013autonomy team communications and trust calibration","author":"Johnson Craig J.","year":"2021","unstructured":"Craig J. Johnson, Mustafa Demir, Nathan J. McNeese, Jamie C. Gorman, Alexandra T. Wolff, and Nancy J. Cooke. 2021. The impact of training on human\u2013autonomy team communications and trust calibration. Human Factors (2021), 00187208211047323.","journal-title":"Human Factors"},{"key":"e_1_3_1_53_2","doi-asserted-by":"publisher","DOI":"10.5465\/amr.1998.926625"},{"key":"e_1_3_1_54_2","volume-title":"LISREL 8: Structural Equation Modeling with the SIMPLIS Command Language","author":"J\u00f6reskog Karl G.","year":"1993","unstructured":"Karl G. J\u00f6reskog and Dag S\u00f6rbom. 1993. LISREL 8: Structural Equation Modeling with the SIMPLIS Command Language. 
Scientific software international."},{"key":"e_1_3_1_55_2","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287590"},{"key":"e_1_3_1_56_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.leaqua.2019.101377"},{"key":"e_1_3_1_57_2","doi-asserted-by":"publisher","DOI":"10.1518\/hfes.46.1.50.30392"},{"key":"e_1_3_1_58_2","doi-asserted-by":"crossref","unstructured":"Tianyi Li Mihaela Vorvoreanu Derek DeBellis and Saleema Amershi. 2023. Assessing human-ai interaction early through factorial surveys: A study on the guidelines for human-ai interaction. ACM Transactions on Computer-Human Interaction 30 5 (2023) 1\u201345.","DOI":"10.1145\/3511605"},{"key":"e_1_3_1_59_2","unstructured":"Enrico Liscio Michiel van der Meer Luciano Cavalcante Siebert Catholijn M. Jonker Niek Mouter and Pradeep K. Murukannaiah. 2021. Axies: Identifying and Evaluating Context-Specific Values. In AAMAS. 799\u2013808."},{"key":"e_1_3_1_60_2","first-page":"1","article-title":"The complex relationship of AI ethics and trust in human\u2013AI teaming: insights from advanced real-world subject matter experts","author":"Lopez Jeremy","year":"2023","unstructured":"Jeremy Lopez, Claire Textor, Caitlin Lancaster, Beau Schelble, Guo Freeman, Rui Zhang, Nathan McNeese, and Richard Pak. 2023. The complex relationship of AI ethics and trust in human\u2013AI teaming: insights from advanced real-world subject matter experts. AI and Ethics (2023), 1\u201321.","journal-title":"AI and Ethics"},{"issue":"1","key":"e_1_3_1_61_2","doi-asserted-by":"crossref","first-page":"115","DOI":"10.1007\/s10551-018-3937-8","article-title":"The ethical standards of judgment questionnaire: Development and validation of independent measures of formalism and consequentialism","volume":"161","author":"Love Ed","year":"2020","unstructured":"Ed Love, Tara Ceranic Salinas, and Jeff D. Rotman. 2020. The ethical standards of judgment questionnaire: Development and validation of independent measures of formalism and consequentialism. 
Journal of Business Ethics 161, 1 (2020), 115\u2013132.","journal-title":"Journal of Business Ethics"},{"key":"e_1_3_1_62_2","first-page":"161","volume-title":"Proceedings of the International Conference on Human-Computer Interaction","author":"Lucero Crisrael","year":"2020","unstructured":"Crisrael Lucero, Christianne Izumigawa, Kurt Frederiksen, Lena Nans, Rebecca Iden, and Douglas S. Lange. 2020. Human-autonomy teaming and explainable AI capabilities in RTS games. In Proceedings of the International Conference on Human-Computer Interaction. Springer, 161\u2013171."},{"issue":"5","key":"e_1_3_1_63_2","doi-asserted-by":"crossref","first-page":"1553","DOI":"10.1177\/0149206314556656","article-title":"How contracts influence trust and distrust","volume":"43","author":"Lumineau Fabrice","year":"2017","unstructured":"Fabrice Lumineau. 2017. How contracts influence trust and distrust. Journal of Management 43, 5 (2017), 1553\u20131577.","journal-title":"Journal of Management"},{"key":"e_1_3_1_64_2","article-title":"Explainable artificial intelligence (XAI) to enhance trust management in intrusion detection systems using decision tree model","volume":"2021","author":"Mahbooba Basim","year":"2021","unstructured":"Basim Mahbooba, Mohan Timilsina, Radhya Sahal, and Martin Serrano. 2021. Explainable artificial intelligence (XAI) to enhance trust management in intrusion detection systems using decision tree model. 
Complexity 2021 (2021), 6634811.","journal-title":"Complexity"},{"key":"e_1_3_1_65_2","doi-asserted-by":"publisher","DOI":"10.1145\/3237189"},{"key":"e_1_3_1_66_2","first-page":"63","volume-title":"Proceedings of the First International Workshop on New Foundations for Human-Centered AI (NeHuAI), Santiago de Compostella, Spain","author":"Maxwell Winston","year":"2020","unstructured":"Winston Maxwell, Val\u00e9rie Beaudouin, Isabelle Bloch, David Bounie, St\u00e9phan Cl\u00e9men\u00e7on, Florence d\u2019Alch\u00e9 Buc, James Eagan, Pavlo Mozharovskyi, and Jayneel Parekh. 2020. Identifying the \u2018Right\u2019 Level of explanation in a given situation. In Proceedings of the First International Workshop on New Foundations for Human-Centered AI (NeHuAI), Santiago de Compostella, Spain. 63."},{"issue":"1","key":"e_1_3_1_67_2","doi-asserted-by":"crossref","first-page":"37","DOI":"10.1111\/1467-9329.00050","article-title":"On defending deontology","volume":"11","author":"McNaughton David","year":"1998","unstructured":"David McNaughton and Piers Rawling. 1998. On defending deontology. Ratio 11, 1 (1998), 37\u201354.","journal-title":"Ratio"},{"key":"e_1_3_1_68_2","volume-title":"Proceedings of the 52nd Hawaii International Conference on System Sciences","author":"McNeese Nathan","year":"2019","unstructured":"Nathan McNeese, Mustafa Demir, Erin Chiou, Nancy Cooke, and Giovanni Yanikian. 2019. Understanding the role of trust in human-autonomy teaming. In Proceedings of the 52nd Hawaii International Conference on System Sciences."},{"key":"e_1_3_1_69_2","doi-asserted-by":"publisher","DOI":"10.1177\/0018720817743223"},{"key":"e_1_3_1_70_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.chb.2023.107874"},{"key":"e_1_3_1_71_2","doi-asserted-by":"crossref","unstructured":"Nathan J. McNeese Beau G. Schelble Lorenzo Barberis Canonico and Mustafa Demir. 2021. Who\/what is my teammate? team composition considerations in human-AI teaming. 
IEEE Transactions on Human-Machine Systems 51 4 (2021) 288\u2013299.","DOI":"10.1109\/THMS.2021.3086018"},{"key":"e_1_3_1_72_2","doi-asserted-by":"publisher","DOI":"10.1177\/0018720815621206"},{"key":"e_1_3_1_73_2","doi-asserted-by":"publisher","DOI":"10.1145\/1958824.1958945"},{"key":"e_1_3_1_74_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.chb.2021.106852"},{"key":"e_1_3_1_75_2","doi-asserted-by":"publisher","DOI":"10.1145\/3449123"},{"issue":"2","key":"e_1_3_1_76_2","doi-asserted-by":"crossref","first-page":"3","DOI":"10.1109\/MIS.2018.2886670","article-title":"Autonomous intelligent agents for team training","volume":"34","author":"Myers Christopher","year":"2018","unstructured":"Christopher Myers, Jerry Ball, Nancy Cooke, Mary Freiman, Michelle Caisse, Stuart Rodgers, Mustafa Demir, and Nathan McNeese. 2018. Autonomous intelligent agents for team training. IEEE Intelligent Systems 34, 2 (2018), 3\u201314.","journal-title":"IEEE Intelligent Systems"},{"issue":"2","key":"e_1_3_1_77_2","doi-asserted-by":"crossref","first-page":"175","DOI":"10.1002\/job.742","article-title":"Predicting the form and direction of work role performance from the big 5 model of personality traits","volume":"33","author":"Neal Andrew","year":"2012","unstructured":"Andrew Neal, Gillian Yeo, Annette Koy, and Tania Xiao. 2012. Predicting the form and direction of work role performance from the big 5 model of personality traits. Journal of Organizational Behavior 33, 2 (2012), 175\u2013192.","journal-title":"Journal of Organizational Behavior"},{"key":"e_1_3_1_78_2","doi-asserted-by":"publisher","DOI":"10.1145\/3173574.3174223"},{"key":"e_1_3_1_79_2","doi-asserted-by":"crossref","first-page":"107762","DOI":"10.1016\/j.chb.2023.107762","article-title":"Human-autonomy Teaming: Need for a guiding team-based framework?","volume":"146","author":"O\u2019Neill Thomas A.","year":"2023","unstructured":"Thomas A. O\u2019Neill, Christopher Flathmann, Nathan J. McNeese, and Eduardo Salas. 2023. 
Human-autonomy Teaming: Need for a guiding team-based framework? Computers in Human Behavior 146 (2023), 107762.","journal-title":"Computers in Human Behavior"},{"key":"e_1_3_1_80_2","first-page":"001872082096086","article-title":"Human\u2013autonomy teaming: A review and analysis of the empirical literature","author":"O\u2019Neill Thomas","year":"2020","unstructured":"Thomas O\u2019Neill, Nathan McNeese, Amy Barron, and Beau Schelble. 2020. Human\u2013autonomy teaming: A review and analysis of the empirical literature. Human Factors (2020), 0018720820960865.","journal-title":"Human Factors"},{"key":"e_1_3_1_81_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jbef.2017.12.004"},{"key":"e_1_3_1_82_2","doi-asserted-by":"crossref","unstructured":"Love Patel Amy Elliott Erik Storlie Rajesh Kethireddy Kim Goodman and William Dickey. 2021. Ethical and legal challenges during the COVID-19 pandemic: are we thinking about rural hospitals? The Journal of Rural Health 37 1 (2021) 175.","DOI":"10.1111\/jrh.12447"},{"key":"e_1_3_1_83_2","doi-asserted-by":"publisher","DOI":"10.1002\/isaf.1422"},{"key":"e_1_3_1_84_2","doi-asserted-by":"crossref","unstructured":"Yao Rong Tobias Leemann Thai-trang Nguyen Lisa Fiedler Tina Seidel Gjergji Kasneci and Enkelejda Kasneci. 2022. Towards human-centered explainable AI: User studies for model explanations. arXiv preprint arXiv:2210.11584.","DOI":"10.1109\/TPAMI.2023.3331846"},{"issue":"1","key":"e_1_3_1_85_2","first-page":"127","article-title":"Building trust in artificial intelligence","volume":"72","author":"Rossi Francesca","year":"2018","unstructured":"Francesca Rossi. 2018. Building trust in artificial intelligence. 
Journal of International Affairs 72, 1 (2018), 127\u2013134.","journal-title":"Journal of International Affairs"},{"key":"e_1_3_1_86_2","doi-asserted-by":"publisher","DOI":"10.1145\/3490099.3511128"},{"key":"e_1_3_1_87_2","doi-asserted-by":"publisher","DOI":"10.1145\/3492832"},{"key":"e_1_3_1_88_2","first-page":"1","article-title":"Investigating the effects of perceived teammate artificiality on human performance and cognition","author":"Schelble Beau G.","year":"2022","unstructured":"Beau G. Schelble, Christopher Flathmann, Nathan J. McNeese, Thomas O\u2019Neill, Richard Pak, and Moses Namara. 2022. Investigating the effects of perceived teammate artificiality on human performance and cognition. International Journal of Human\u2013Computer Interaction (2022), 1\u201316.","journal-title":"International Journal of Human\u2013Computer Interaction"},{"key":"e_1_3_1_89_2","doi-asserted-by":"crossref","first-page":"001872082211169","DOI":"10.1177\/00187208221116952","article-title":"Towards ethical AI: Empirically investigating dimensions of AI ethics, trust repair, and performance in human-AI teaming","author":"Schelble Beau G.","year":"2022","unstructured":"Beau G. Schelble, Jeremy Lopez, Claire Textor, Rui Zhang, Nathan J. McNeese, Richard Pak, and Guo Freeman. 2022. Towards ethical AI: Empirically investigating dimensions of AI ethics, trust repair, and performance in human-AI teaming. Human Factors (2022), 00187208221116952.","journal-title":"Human Factors"},{"issue":"2","key":"e_1_3_1_90_2","doi-asserted-by":"crossref","first-page":"34","DOI":"10.1177\/1064804618818592","article-title":"Trust in artificial intelligence","volume":"27","author":"Sethumadhavan Arathi","year":"2019","unstructured":"Arathi Sethumadhavan. 2019. Trust in artificial intelligence. 
Ergonomics in Design 27, 2 (2019), 34\u201334.","journal-title":"Ergonomics in Design"},{"key":"e_1_3_1_91_2","doi-asserted-by":"publisher","DOI":"10.1080\/1369118X.2019.1568515"},{"key":"e_1_3_1_92_2","doi-asserted-by":"crossref","unstructured":"Nicholas Shea. 2023. Moving beyond content-specific computation in artificial neural networks. Mind & Language 38 1 (2023) 156\u2013177.","DOI":"10.1111\/mila.12387"},{"key":"e_1_3_1_93_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijhcs.2020.102551"},{"key":"e_1_3_1_94_2","doi-asserted-by":"publisher","DOI":"10.1145\/3419764"},{"issue":"2","key":"e_1_3_1_95_2","first-page":"47","article-title":"Building trust in artificial intelligence, machine learning, and robotics","volume":"31","author":"Siau Keng","year":"2018","unstructured":"Keng Siau and Weiyu Wang. 2018. Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal 31, 2 (2018), 47\u201353.","journal-title":"Cutter Business Technology Journal"},{"issue":"1","key":"e_1_3_1_96_2","first-page":"1","article-title":"Acceptance and fear of artificial intelligence: Associations with personality in a german and a chinese sample","volume":"2","author":"Sindermann Cornelia","year":"2021","unstructured":"Cornelia Sindermann, Haibo Yang, Jon D. Elhai, Shixin Yang, Ling Quan, Mei Li, and Christian Montag. 2021. Acceptance and fear of artificial intelligence: Associations with personality in a german and a chinese sample. Discover Psychology 2, 1 (2021), 1\u201312.","journal-title":"Discover Psychology"},{"key":"e_1_3_1_97_2","first-page":"1","article-title":"A survey of mental modeling techniques in human\u2013robot teaming","author":"Tabrez Aaquib","year":"2020","unstructured":"Aaquib Tabrez, Matthew B. Luebbers, and Bradley Hayes. 2020. A survey of mental modeling techniques in human\u2013robot teaming. 
Current Robotics Reports (2020), 1\u20139.","journal-title":"Current Robotics Reports"},{"key":"e_1_3_1_98_2","first-page":"155534342211139","article-title":"Exploring the relationship between ethics and trust in human\u2013artificial intelligence teaming: A mixed methods approach","author":"Textor Claire","year":"2022","unstructured":"Claire Textor, Rui Zhang, Jeremy Lopez, Beau G. Schelble, Nathan J. McNeese, Guo Freeman, Richard Pak, Chad Tossell, and Ewart J. de Visser. 2022. Exploring the relationship between ethics and trust in human\u2013artificial intelligence teaming: A mixed methods approach. Journal of Cognitive Engineering and Decision Making (2022), 15553434221113964.","journal-title":"Journal of Cognitive Engineering and Decision Making"},{"issue":"1","key":"e_1_3_1_99_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3480247","article-title":"Initial responses to false positives in ai-supported continuous interactions: A colonoscopy case study","volume":"12","author":"Berkel Niels Van","year":"2022","unstructured":"Niels Van Berkel, Jeremy Opie, Omer F. Ahmad, Laurence Lovat, Danail Stoyanov, and Ann Blandford. 2022. Initial responses to false positives in ai-supported continuous interactions: A colonoscopy case study. ACM Transactions on Interactive Intelligent Systems (TiiS) 12, 1 (2022), 1\u201318.","journal-title":"ACM Transactions on Interactive Intelligent Systems (TiiS)"},{"key":"e_1_3_1_100_2","doi-asserted-by":"publisher","DOI":"10.1111\/j.1540-5915.2008.00192.x"},{"key":"e_1_3_1_101_2","first-page":"775","volume-title":"Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","author":"Voiklis John","year":"2016","unstructured":"John Voiklis, Boyoung Kim, Corey Cusimano, and Bertram F Malle. 2016. Moral judgments of human vs. robot agents. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). 
IEEE, 775\u2013780."},{"key":"e_1_3_1_102_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3359313","article-title":"Human-AI collaboration in data science: Exploring data scientists\u2019 perceptions of automated AI","volume":"3","author":"Wang Dakuo","year":"2019","unstructured":"Dakuo Wang, Justin D. Weisz, Michael Muller, Parikshit Ram, Werner Geyer, Casey Dugan, Yla Tausczik, Horst Samulowitz, and Alexander Gray. 2019. Human-AI collaboration in data science: Exploring data scientists\u2019 perceptions of automated AI. Proceedings of the ACM on Human\u2013Computer Interaction 3, CSCW (2019), 1\u201324.","journal-title":"Proceedings of the ACM on Human\u2013Computer Interaction"},{"issue":"1","key":"e_1_3_1_103_2","doi-asserted-by":"crossref","first-page":"1266","DOI":"10.1109\/TVCG.2022.3209435","article-title":"Extending the nested model for user-centric XAI: A design study on GNN-based drug repurposing","volume":"29","author":"Wang Qianwen","year":"2022","unstructured":"Qianwen Wang, Kexin Huang, Payal Chandak, Marinka Zitnik, and Nils Gehlenborg. 2022. Extending the nested model for user-centric XAI: A design study on GNN-based drug repurposing. IEEE Transactions on Visualization and Computer Graphics 29, 1 (2022), 1266\u20131276.","journal-title":"IEEE Transactions on Visualization and Computer Graphics"},{"key":"e_1_3_1_104_2","article-title":"Effects of explanations in AI-assisted decision making: Principles and comparisons","author":"Wang Xinru","year":"2022","unstructured":"Xinru Wang and Ming Yin. 2022. Effects of explanations in AI-assisted decision making: Principles and comparisons. 
ACM Transactions on Interactive Intelligent Systems 12, 4 (2022), 1\u201336.","journal-title":"ACM Transactions on Interactive Intelligent Systems"},{"key":"e_1_3_1_105_2","first-page":"631","volume-title":"Proceedings of the Human Factors and Ergonomics Society Annual Meeting","volume":"63","author":"Warden Toby","year":"2019","unstructured":"Toby Warden, Pascale Carayon, Emilie M. Roth, Jessie Chen, William J. Clancey, Robert Hoffman, and Marc L. Steinberg. 2019. The national academies board on human system integration (BOHSI) panel: Explainable AI, system transparency, and human machine teaming. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 63. SAGE Publications Sage CA: Los Angeles, CA, 631\u2013635."},{"key":"e_1_3_1_106_2","doi-asserted-by":"publisher","DOI":"10.1016\/S1441-3582(02)70157-2"},{"key":"e_1_3_1_107_2","doi-asserted-by":"crossref","first-page":"7","DOI":"10.1145\/3308532.3329441","volume-title":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","author":"Weitz Katharina","year":"2019","unstructured":"Katharina Weitz, Dominik Schiller, Ruben Schlagowski, Tobias Huber, and Elisabeth Andr\u00e9. 2019. \u201cDo you trust me?\u201d Increasing user-trust by integrating virtual agents in explainable AI interaction design. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents. 7\u20139."},{"key":"e_1_3_1_108_2","doi-asserted-by":"publisher","DOI":"10.1145\/3282486"},{"key":"e_1_3_1_109_2","article-title":"Artificial intelligence: American attitudes and trends","author":"Zhang Baobao","year":"2019","unstructured":"Baobao Zhang and Allan Dafoe. 2019. Artificial intelligence: American attitudes and trends. 
Available at SSRN 3312874 (2019).","journal-title":"Available at SSRN 3312874"},{"key":"e_1_3_1_110_2","doi-asserted-by":"publisher","DOI":"10.1145\/3610072"},{"key":"e_1_3_1_111_2","doi-asserted-by":"publisher","DOI":"10.1145\/3432945"},{"key":"e_1_3_1_112_2","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372852"},{"key":"e_1_3_1_113_2","volume-title":"Proceedings of the IJCAI 2019 Workshop on Explainable AI (XAI)","author":"Zhou Jianlong","year":"2019","unstructured":"Jianlong Zhou and Fang Chen. 2019. Towards trustworthy human-AI teaming under uncertainty. In Proceedings of the IJCAI 2019 Workshop on Explainable AI (XAI)."},{"key":"e_1_3_1_114_2","doi-asserted-by":"publisher","DOI":"10.1145\/3232077"}],"container-title":["ACM Transactions on Interactive Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3635474","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3635474","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:36:21Z","timestamp":1750178181000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3635474"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,2,5]]},"references-count":113,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2024,3,31]]}},"alternative-id":["10.1145\/3635474"],"URL":"https:\/\/doi.org\/10.1145\/3635474","relation":{},"ISSN":["2160-6455","2160-6463"],"issn-type":[{"value":"2160-6455","type":"print"},{"value":"2160-6463","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,2,5]]},"assertion":[{"value":"2022-12-29","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2023-10-12","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-02-05","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}