{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T23:17:17Z","timestamp":1776122237641,"version":"3.50.1"},"reference-count":82,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2022,12,12]],"date-time":"2022-12-12T00:00:00Z","timestamp":1670803200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"DARPA Explainable Artificial Intelligence (XAI) Program","award":["N66001-17-2-4032"],"award-info":[{"award-number":["N66001-17-2-4032"]}]},{"name":"NSF","award":["1900767"],"award-info":[{"award-number":["1900767"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Interact. Intell. Syst."],"published-print":{"date-parts":[[2022,12,31]]},"abstract":"<jats:p>While EXplainable Artificial Intelligence (XAI) approaches aim to improve human-AI collaborative decision-making by improving model transparency and mental model formations, experiential factors associated with human users can cause challenges in ways system designers do not anticipate. In this article, we first showcase a user study on how anchoring bias can potentially affect mental model formations when users initially interact with an intelligent system and the role of explanations in addressing this bias. Using a video activity recognition tool in cooking domain, we asked participants to verify whether a set of kitchen policies are being followed, with each policy focusing on a weakness or a strength. We controlled the order of the policies and the presence of explanations to test our hypotheses. 
Our main finding shows that those who observed system strengths early on were more prone to automation bias and made significantly more errors due to positive first impressions of the system, although they built a more accurate mental model of the system\u2019s competencies. However, those who encountered weaknesses earlier made significantly fewer errors, since they tended to rely more on themselves, although they also underestimated model competencies due to a more negative first impression of the model. Motivated by these findings and similar existing work, we formalize and present a conceptual model of users\u2019 past experiences that examines the relations between users\u2019 backgrounds, experiences, and human factors in XAI systems based on usage time. Our work presents strong findings and implications, aiming to raise AI designers\u2019 awareness of biases associated with user impressions and backgrounds.<\/jats:p>","DOI":"10.1145\/3531066","type":"journal-article","created":{"date-parts":[[2022,4,29]],"date-time":"2022-04-29T11:37:31Z","timestamp":1651232251000},"page":"1-29","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":20,"title":["On the Importance of User Backgrounds and Impressions: Lessons Learned from Interactive AI Applications"],"prefix":"10.1145","volume":"12","author":[{"given":"Mahsan","family":"Nourani","sequence":"first","affiliation":[{"name":"University of Florida, Gainesville, Florida"}]},{"given":"Chiradeep","family":"Roy","sequence":"additional","affiliation":[{"name":"University of Texas at Dallas, Dallas, Texas"}]},{"given":"Jeremy E.","family":"Block","sequence":"additional","affiliation":[{"name":"University of Florida, Gainesville, Florida"}]},{"given":"Donald R.","family":"Honeycutt","sequence":"additional","affiliation":[{"name":"University of Florida, Gainesville, Florida"}]},{"given":"Tahrima","family":"Rahman","sequence":"additional","affiliation":[{"name":"University 
of Texas at Dallas, Dallas, Texas"}]},{"given":"Eric D.","family":"Ragan","sequence":"additional","affiliation":[{"name":"University of Florida, Gainesville, Florida"}]},{"given":"Vibhav","family":"Gogate","sequence":"additional","affiliation":[{"name":"University of Texas at Dallas, Dallas, Texas"}]}],"member":"320","published-online":{"date-parts":[[2022,12,12]]},"reference":[{"key":"e_1_3_2_2_2","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376615"},{"key":"e_1_3_2_3_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2870052"},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1145\/3377325.3377519"},{"key":"e_1_3_2_5_2","volume-title":"People + AI Guidebook","author":"Research PAIR Team at Google","year":"2019","unstructured":"PAIR Team at Google Research. 2019. People + AI Guidebook. Retrieved from https:\/\/pair.withgoogle.com\/chapter\/mental-models\/."},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1037\/1528-3542.6.2.269"},{"key":"e_1_3_2_7_2","volume-title":"Thinking and Deciding","author":"Baron Jonathan","year":"2000","unstructured":"Jonathan Baron. 2000. Thinking and Deciding. Cambridge University Press."},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICHI.2015.26"},{"key":"e_1_3_2_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/VAST.2017.8585665"},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.artint.2021.103471"},{"key":"e_1_3_2_11_2","volume-title":"The Nature of Explanation","author":"Craik Kenneth J. W.","year":"1943","unstructured":"Kenneth J. W. Craik. 1943. The Nature of Explanation. Cambridge University Press."},{"key":"e_1_3_2_12_2","volume-title":"Proceedings of the 9th International AAAI Conference on Web and Social Media","author":"Rieis Julio Cesar Soares Dos","year":"2015","unstructured":"Julio Cesar Soares Dos Rieis, Fabr\u00edcio Benevenuto de Souza, Pedro Olmo S. Vaz de Melo, Raquel Oliveira Prates, Haewoon Kwak, and Jisun An. 2015. 
Breaking the news: First impressions matter on online news. In Proceedings of the 9th International AAAI Conference on Web and Social Media."},{"key":"e_1_3_2_13_2","unstructured":"Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. Retrieved from https:\/\/arxiv.org\/abs\/1702.08608."},{"key":"e_1_3_2_14_2","first-page":"247","volume-title":"Advances in Experimental Social Psychology","author":"Dunning David","year":"2011","unstructured":"David Dunning. 2011. The Dunning\u2013Kruger effect: On being ignorant of one\u2019s own ignorance. In Advances in Experimental Social Psychology. Vol. 44. Elsevier, 247\u2013296."},{"key":"e_1_3_2_15_2","first-page":"63","article-title":"Confidence considered: Assessing the quality of decisions and performance","author":"Dunning David","year":"2012","unstructured":"David Dunning. 2012. Confidence considered: Assessing the quality of decisions and performance. Soc. Metacogn. (2012), 63\u201380.","journal-title":"Soc. Metacogn."},{"key":"e_1_3_2_16_2","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445188"},{"key":"e_1_3_2_17_2","unstructured":"Upol Ehsan, Samir Passi, Q. Vera Liao, Larry Chan, I. Lee, Michael Muller, Mark O. Riedl, et\u00a0al. 2021. The who in explainable AI: How AI background shapes perceptions of AI explanations. Retrieved from https:\/\/arxiv.org\/abs\/2107.13509."},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNN.2011.2160459"},{"key":"e_1_3_2_19_2","first-page":"2","article-title":"Explainable artificial intelligence (XAI)","volume":"2","author":"Gunning David","year":"2017","unstructured":"David Gunning. 2017. Explainable artificial intelligence (XAI). 
Defense Advanced Research Projects Agency (DARPA) 2 (2017), 2.","journal-title":"Defense Advanced Research Projects Agency (DARPA)"},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.1109\/RO-MAN46459.2019.8956335"},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.3758\/BF03197085"},{"key":"e_1_3_2_22_2","doi-asserted-by":"crossref","unstructured":"Peter Hase and Mohit Bansal. 2020. Evaluating explainable AI: Which algorithmic explanations help users predict model behavior? Retrieved from https:\/\/arxiv.org\/abs\/2005.01831v1.","DOI":"10.18653\/v1\/2020.acl-main.491"},{"key":"e_1_3_2_23_2","unstructured":"Robert R. Hoffman, Shane T. Mueller, Gary Klein, and Jordan Litman. 2018. Metrics for explainable AI: Challenges and prospects. Retrieved from https:\/\/arxiv.org\/abs\/1812.04608."},{"key":"e_1_3_2_24_2","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2018.2843369"},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2019.2934659"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1145\/2856767.2856811"},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.1609\/hcomp.v8i1.7464"},{"key":"e_1_3_2_28_2","doi-asserted-by":"publisher","DOI":"10.1073\/pnas.1012933107"},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCE.2011.6131162"},{"key":"e_1_3_2_30_2","article-title":"When algorithms err: Differential impact of early vs. late errors on users\u2019 reliance on algorithms","author":"Kim Antino","year":"2020","unstructured":"Antino Kim, Mochen Yang, and Jingjing Zhang. 2020. When algorithms err: Differential impact of early vs. late errors on users\u2019 reliance on algorithms. 
Late Errors on Users\u2019 Reliance on Algorithms (July 2020).","journal-title":"Late Errors on Users\u2019 Reliance on Algorithms (July 2020)"},{"key":"e_1_3_2_31_2","doi-asserted-by":"publisher","DOI":"10.1177\/0272989X16644563"},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/IBICA.2012.41"},{"key":"e_1_3_2_33_2","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376590"},{"key":"e_1_3_2_34_2","doi-asserted-by":"publisher","DOI":"10.3121\/cmr.2015.1289"},{"key":"e_1_3_2_35_2","doi-asserted-by":"publisher","DOI":"10.1145\/3236386.3241340"},{"key":"e_1_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.1177\/0018720811411912"},{"key":"e_1_3_2_37_2","doi-asserted-by":"publisher","DOI":"10.1518\/001872008X288574"},{"key":"e_1_3_2_38_2","doi-asserted-by":"publisher","DOI":"10.1086\/219886"},{"key":"e_1_3_2_39_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.artint.2018.07.007"},{"key":"e_1_3_2_40_2","unstructured":"Tim Miller, Piers Howe, and Liz Sonenberg. 2017. Explainable AI: Beware of inmates running the asylum or: How I learnt to stop worrying and love the social and behavioural sciences. Retrieved from https:\/\/arxiv.org\/abs\/1712.00547."},{"key":"e_1_3_2_41_2","article-title":"A survey of evaluation methods and measures for interpretable machine learning","author":"Mohseni Sina","year":"2018","unstructured":"Sina Mohseni, Niloofar Zarei, and Eric D. Ragan. 2018. A survey of evaluation methods and measures for interpretable machine learning. ACM Trans. Interact. Intell. Syst. (2018).","journal-title":"ACM Trans. Interact. Intell. Syst."},{"key":"e_1_3_2_42_2","doi-asserted-by":"publisher","DOI":"10.1037\/0033-295X.115.2.502"},{"key":"e_1_3_2_43_2","first-page":"7","volume-title":"Some Observations on Mental Models (1st ed.)","author":"Norman Donald A.","year":"1983","unstructured":"Donald A. Norman. 1983. Some Observations on Mental Models (1st ed.). Lawrence Erlbaum Associates, 7\u201314. 
Retrieved from https:\/\/ar264sweeney.files.wordpress.com\/2015\/11\/norman_mentalmodels.pdf."},{"key":"e_1_3_2_44_2","doi-asserted-by":"publisher","DOI":"10.1145\/3334480.3382967"},{"key":"e_1_3_2_45_2","doi-asserted-by":"publisher","DOI":"10.1609\/hcomp.v8i1.7469"},{"key":"e_1_3_2_46_2","doi-asserted-by":"publisher","DOI":"10.1145\/3397481.3450639"},{"key":"e_1_3_2_47_2","unstructured":"Mahsan Nourani, Chiradeep Roy, Tahrima Rahman, Eric D. Ragan, Nicholas Ruozzi, and Vibhav Gogate. 2020. Don\u2019t explain without verifying veracity: An evaluation of explainable AI with video activity recognition. Retrieved from https:\/\/arxiv.org\/abs\/2005.02335."},{"key":"e_1_3_2_48_2","doi-asserted-by":"publisher","DOI":"10.23915\/distill.00010"},{"key":"e_1_3_2_49_2","unstructured":"P. Deepak, V. Sanil, and M. Jose Joemon. 2021. On fairness and interpretability. Retrieved from http:\/\/arxiv.org\/abs\/2106.13271."},{"key":"e_1_3_2_50_2","doi-asserted-by":"publisher","DOI":"10.1145\/3319502.3374786"},{"key":"e_1_3_2_51_2","doi-asserted-by":"publisher","DOI":"10.1109\/RO-MAN46459.2019.8956463"},{"key":"e_1_3_2_52_2","unstructured":"Gregory Plumb, Denali Molitor, and Ameet Talwalkar. 2018. Supervised local modeling for interpretability. Retrieved from http:\/\/arxiv.org\/abs\/1807.02910v1."},{"key":"e_1_3_2_53_2","article-title":"Manipulating and measuring model interpretability","author":"Poursabzi-Sangdeh Forough","year":"2018","unstructured":"Forough Poursabzi-Sangdeh, Daniel G. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, and Hanna Wallach. 2018. Manipulating and measuring model interpretability. 
Retrieved from https:\/\/arxiv.org\/abs\/1802.07810 (to appear in the Proceedings of ACM CHI 2021).","journal-title":"Retrieved from https:\/\/arxiv.org\/abs\/1802.07810 (to appear in the Proceedings of ACM CHI 2021)"},{"key":"e_1_3_2_54_2","doi-asserted-by":"publisher","DOI":"10.1162\/003355399555945"},{"key":"e_1_3_2_55_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-662-44851-9_40"},{"key":"e_1_3_2_56_2","first-page":"403","article-title":"Toward intelligent decision support systems: An artificially intelligent statistician","author":"Remus William E.","year":"1986","unstructured":"William E. Remus and Jeffrey E. Kottemann. 1986. Toward intelligent decision support systems: An artificially intelligent statistician. MIS Quarterly (1986), 403\u2013418.","journal-title":"MIS Quarterly"},{"key":"e_1_3_2_57_2","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939778"},{"key":"e_1_3_2_58_2","first-page":"9","article-title":"Anchors: High precision model-agnostic explanations","author":"Ribeiro Marco Tulio","year":"2018","unstructured":"Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Anchors: High precision model-agnostic explanations. In Proceedings of the Association for the Advancement of Artificial Intelligence (www.aaai.org). 9.","journal-title":"Proceedings of the Association for the Advancement of Artificial Intelligence (www.aaai.org)"},{"key":"e_1_3_2_59_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-11752-2_15"},{"key":"e_1_3_2_60_2","volume-title":"Proceedings of the IUI Workshops","author":"Roy Chiradeep","year":"2019","unstructured":"Chiradeep Roy, Mahesh Shanbhag, Mahsan Nourani, Tahrima Rahman, Samia Kabir, Vibhav Gogate, Nicholas Ruozzi, and Eric D. Ragan. 2019. Explainable activity recognition in videos. In Proceedings of the IUI Workshops."},{"key":"e_1_3_2_61_2","volume-title":"Cognitive Bias Examples","author":"Ruhl Charlotte","year":"2021","unstructured":"Charlotte Ruhl. 2021. Cognitive Bias Examples. 
Retrieved from www.simplypsychology.org\/cognitive-bias.html."},{"key":"e_1_3_2_62_2","doi-asserted-by":"publisher","DOI":"10.3758\/BF03202637"},{"key":"e_1_3_2_63_2","doi-asserted-by":"publisher","DOI":"10.1145\/3301275.3302308"},{"key":"e_1_3_2_64_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.74"},{"key":"e_1_3_2_65_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ress.2018.04.016"},{"key":"e_1_3_2_66_2","doi-asserted-by":"publisher","DOI":"10.1145\/3419764"},{"key":"e_1_3_2_67_2","doi-asserted-by":"publisher","DOI":"10.1002\/bdm.486"},{"key":"e_1_3_2_68_2","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376624"},{"key":"e_1_3_2_69_2","doi-asserted-by":"publisher","DOI":"10.1145\/1378773.1378781"},{"key":"e_1_3_2_70_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"e_1_3_2_71_2","doi-asserted-by":"publisher","DOI":"10.1145\/3397481.3450662"},{"key":"e_1_3_2_72_2","doi-asserted-by":"publisher","DOI":"10.1109\/MIS.2008.18"},{"key":"e_1_3_2_73_2","doi-asserted-by":"publisher","DOI":"10.2307\/3033716"},{"key":"e_1_3_2_74_2","doi-asserted-by":"publisher","DOI":"10.1145\/3450613.3456817"},{"key":"e_1_3_2_75_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10462-017-9545-7"},{"key":"e_1_3_2_76_2","doi-asserted-by":"publisher","DOI":"10.3389\/frobt.2021.640647"},{"key":"e_1_3_2_77_2","article-title":"A human-centered agenda for intelligible machine learning","author":"Vaughan Jennifer Wortman","year":"2020","unstructured":"Jennifer Wortman Vaughan and Hanna Wallach. 2020. A human-centered agenda for intelligible machine learning. Machines We Trust: Getting Along with Artificial Intelligence. 
MIT Press.","journal-title":"Machines We Trust: Getting Along with Artificial Intelligence"},{"key":"e_1_3_2_78_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-29384-0_34"},{"key":"e_1_3_2_79_2","article-title":"Clinical applications of machine learning algorithms: Beyond the black box","volume":"364","author":"Watson David S.","year":"2019","unstructured":"David S. Watson, Jenny Krutzinna, Ian N. Bruce, Christopher E. M. Griffiths, Iain B. McInnes, Michael R. Barnes, and Luciano Floridi. 2019. Clinical applications of machine learning algorithms: Beyond the black box. BMJ 364 (2019).","journal-title":"BMJ"},{"key":"e_1_3_2_80_2","doi-asserted-by":"publisher","DOI":"10.1145\/3282486"},{"key":"e_1_3_2_81_2","doi-asserted-by":"publisher","DOI":"10.1109\/ROMAN.2018.8525669"},{"key":"e_1_3_2_82_2","doi-asserted-by":"publisher","DOI":"10.1007\/s12293-009-0018-7"},{"key":"e_1_3_2_83_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41746-019-0087-z"}],"container-title":["ACM Transactions on Interactive Intelligent 
Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3531066","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3531066","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3531066","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T19:00:27Z","timestamp":1750186827000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3531066"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,12,12]]},"references-count":82,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2022,12,31]]}},"alternative-id":["10.1145\/3531066"],"URL":"https:\/\/doi.org\/10.1145\/3531066","relation":{},"ISSN":["2160-6455","2160-6463"],"issn-type":[{"value":"2160-6455","type":"print"},{"value":"2160-6463","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,12,12]]},"assertion":[{"value":"2021-08-11","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-04-11","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-12-12","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}