{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T02:10:04Z","timestamp":1750299004130,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":63,"publisher":"ACM","license":[{"start":{"date-parts":[[2025,3,24]],"date-time":"2025-03-24T00:00:00Z","timestamp":1742774400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"DEVCOM Analysis Center","award":["W911NF-22-2-0001"],"award-info":[{"award-number":["W911NF-22-2-0001"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2025,3,24]]},"DOI":"10.1145\/3708359.3712121","type":"proceedings-article","created":{"date-parts":[[2025,3,19]],"date-time":"2025-03-19T12:50:34Z","timestamp":1742388634000},"page":"672-684","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Evaluating the Impact of AI-Generated Visual Explanations on Decision-Making for Image Matching"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0181-700X","authenticated-orcid":false,"given":"Albatool","family":"Wazzan","sequence":"first","affiliation":[{"name":"Dept of Computer and Information Science, Temple University, Philadelphia, Pennsylvania, USA,"}]},{"ORCID":"https:\/\/orcid.org\/0009-0009-1658-8391","authenticated-orcid":false,"given":"Marcus","family":"Wright","sequence":"additional","affiliation":[{"name":"Dept of Computer and Information Science, Temple University, Philadelphia, Pennsylvania, USA,"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2781-6619","authenticated-orcid":false,"given":"Stephen","family":"MacNeil","sequence":"additional","affiliation":[{"name":"Dept of Computer and Information Science, Temple University, Philadelphia, Pennsylvania, USA,"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6066-0946","authenticated-orcid":false,"given":"Richard","family":"Souvenir","sequence":"additional","affiliation":[{"name":"Dept of Computer and Information Science, Temple University, Philadelphia, Pennsylvania, USA,"}]}],"member":"320","published-online":{"date-parts":[[2025,3,24]]},"reference":[{"key":"e_1_3_3_2_2_2","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376615"},{"key":"e_1_3_3_2_3_2","doi-asserted-by":"publisher","DOI":"10.1145\/3377325.3377519"},{"key":"e_1_3_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1007\/11555261_104"},{"key":"e_1_3_3_2_5_2","unstructured":"G Beier. 1999. Locus of control when interacting with technology (Kontroll\u00fcberzeugungen im Umgang mit Technik). Report Psychologie 24 9 (1999) 684\u2013693."},{"key":"e_1_3_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1145\/3643834.3660722"},{"key":"e_1_3_3_2_7_2","doi-asserted-by":"publisher","DOI":"10.1109\/WACV51458.2022.00160"},{"key":"e_1_3_3_2_8_2","volume-title":"International Conference on Learning Representations","author":"Borowski Judy","year":"2021","unstructured":"Judy Borowski, Roland\u00a0Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas\u00a0SA Wallis, Matthias Bethge, and Wieland Brendel. 2021. Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization. In International Conference on Learning Representations. 
ICLR, Austria, 9\u00a0pages."},{"key":"e_1_3_3_2_9_2","doi-asserted-by":"crossref","unstructured":"Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative research in psychology 3 2 (2006) 77\u2013101.","DOI":"10.1191\/1478088706qp063oa"},{"key":"e_1_3_3_2_10_2","unstructured":"Wieland Brendel and Matthias Bethge. 2019. Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet. arxiv:https:\/\/arXiv.org\/abs\/1904.00760\u00a0[cs.CV] https:\/\/arxiv.org\/abs\/1904.00760"},{"key":"e_1_3_3_2_11_2","doi-asserted-by":"crossref","unstructured":"Nadia Burkart and Marco\u00a0F Huber. 2021. A survey on the explainability of supervised machine learning. Journal of Artificial Intelligence Research 70 (2021) 245\u2013317.","DOI":"10.1613\/jair.1.12228"},{"key":"e_1_3_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICHI.2015.26"},{"key":"e_1_3_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00990"},{"key":"e_1_3_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01008"},{"key":"e_1_3_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1145\/2858036.2858498"},{"key":"e_1_3_3_2_16_2","unstructured":"Shelly Chaiken. 1999. Dual-process theories in social psychology. Guilford Press google schola 2 (1999) 206\u2013214."},{"key":"e_1_3_3_2_17_2","unstructured":"Chaofan Chen Oscar Li Daniel Tao Alina Barnett Cynthia Rudin and Jonathan\u00a0K Su. 2019. This looks like that: deep learning for interpretable image recognition. Advances in neural information processing systems 32 (2019) 12\u00a0pages."},{"key":"e_1_3_3_2_18_2","unstructured":"Eric Chu Deb Roy and Jacob Andreas. 2020. Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction. arxiv:https:\/\/arXiv.org\/abs\/2007.12248\u00a0[cs.LG] https:\/\/arxiv.org\/abs\/2007.12248"},{"key":"e_1_3_3_2_19_2","unstructured":"Julien Colin Thomas Fel Remi Cadene and Thomas Serre. 2023. What I Cannot Predict I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods. arxiv:https:\/\/arXiv.org\/abs\/2112.04417\u00a0[cs.CV] https:\/\/arxiv.org\/abs\/2112.04417"},{"key":"e_1_3_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-0-387-39940-9_488"},{"key":"e_1_3_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01002"},{"key":"e_1_3_3_2_22_2","unstructured":"Finale Doshi-Velez and Been Kim. 2017. Towards A Rigorous Science of Interpretable Machine Learning. arXiv: Machine Learning none (2017) 9\u00a0pages. arXiv:https:\/\/arXiv.org\/abs\/1702.08608https:\/\/api.semanticscholar.org\/CorpusID:11319376"},{"key":"e_1_3_3_2_23_2","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445188"},{"key":"e_1_3_3_2_24_2","doi-asserted-by":"publisher","DOI":"10.1145\/3613904.3642474"},{"key":"e_1_3_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-60117-1_33"},{"key":"e_1_3_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00304"},{"key":"e_1_3_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/iccv.2017.371"},{"key":"e_1_3_3_2_28_2","unstructured":"Sara Hooker Dumitru Erhan Pieter-Jan Kindermans and Been Kim. 2019. A Benchmark for Interpretability Methods in Deep Neural Networks. 
arxiv:https:\/\/arXiv.org\/abs\/1806.10758\u00a0[cs.LG] https:\/\/arxiv.org\/abs\/1806.10758"},{"key":"e_1_3_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.1109\/WACV.2014.6835989"},{"key":"e_1_3_3_2_30_2","doi-asserted-by":"publisher","DOI":"10.1145\/3613905.3650812"},{"key":"e_1_3_3_2_31_2","doi-asserted-by":"publisher","DOI":"10.4018\/978-1-60566-048-6.ch003"},{"key":"e_1_3_3_2_32_2","unstructured":"Sunnie S.\u00a0Y. Kim Nicole Meister Vikram\u00a0V. Ramaswamy Ruth Fong and Olga Russakovsky. 2022. HIVE: Evaluating the Human Interpretability of Visual Explanations. arxiv:https:\/\/arXiv.org\/abs\/2112.03184\u00a0[cs.CV] https:\/\/arxiv.org\/abs\/2112.03184"},{"key":"e_1_3_3_2_33_2","doi-asserted-by":"publisher","DOI":"10.1145\/3544548.3581001"},{"key":"e_1_3_3_2_34_2","unstructured":"Niklas K\u00fchl Christian Meske Maximilian Nitsche and Jodie Lobana. 2024. Investigating the Role of Explainability and AI Literacy in User Compliance. arxiv:https:\/\/arXiv.org\/abs\/2406.12660\u00a0[cs.AI] https:\/\/arxiv.org\/abs\/2406.12660"},{"key":"e_1_3_3_2_35_2","unstructured":"Isaac Lage Emily Chen Jeffrey He Menaka Narayanan Been Kim Sam Gershman and Finale Doshi-Velez. 2019. An Evaluation of the Human-Interpretability of Explanation. arxiv:https:\/\/arXiv.org\/abs\/1902.00006\u00a0[cs.LG] https:\/\/arxiv.org\/abs\/1902.00006"},{"key":"e_1_3_3_2_36_2","doi-asserted-by":"publisher","unstructured":"Vivian Lai Yiming Zhang Chacha Chen Q.\u00a0Vera Liao and Chenhao Tan. 2023. Selective Explanations: Leveraging Human Input to Align Explainable AI. Proc. ACM Hum.-Comput. Interact. 7 CSCW2 Article 357 (Oct. 2023) 35\u00a0pages. 10.1145\/3610206","DOI":"10.1145\/3610206"},{"key":"e_1_3_3_2_37_2","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376727"},{"key":"e_1_3_3_2_38_2","doi-asserted-by":"publisher","unstructured":"A.\u00a0E. Maxwell. 1977. Coefficients of Agreement Between Observers and Their Interpretation. The British Journal of Psychiatry 130 1 (1977) 79\u201383. 10.1192\/bjp.130.1.79","DOI":"10.1192\/bjp.130.1.79"},{"key":"e_1_3_3_2_39_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.01469"},{"key":"e_1_3_3_2_40_2","unstructured":"Giang Nguyen Daeyoung Kim and Anh Nguyen. 2022. The effectiveness of feature attribution methods and its correlation with automatic evaluation scores. arxiv:https:\/\/arXiv.org\/abs\/2105.14944\u00a0[cs.CV] https:\/\/arxiv.org\/abs\/2105.14944"},{"key":"e_1_3_3_2_41_2","doi-asserted-by":"publisher","DOI":"10.1145\/3491102.3502104"},{"key":"e_1_3_3_2_42_2","unstructured":"Vitali Petsiuk Abir Das and Kate Saenko. 2018. RISE: Randomized Input Sampling for Explanation of Black-box Models. arxiv:https:\/\/arXiv.org\/abs\/1806.07421\u00a0[cs.CV] https:\/\/arxiv.org\/abs\/1806.07421"},{"key":"e_1_3_3_2_43_2","doi-asserted-by":"publisher","DOI":"10.1109\/FUZZ45933.2021.9494589"},{"key":"e_1_3_3_2_44_2","doi-asserted-by":"crossref","unstructured":"Samuele Poppi Marcella Cornia Lorenzo Baraldi and Rita Cucchiara. 2021. Revisiting The Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis. arxiv:https:\/\/arXiv.org\/abs\/2104.10252\u00a0[cs.CV] https:\/\/arxiv.org\/abs\/2104.10252","DOI":"10.1109\/CVPRW53098.2021.00260"},{"key":"e_1_3_3_2_45_2","doi-asserted-by":"crossref","unstructured":"Travis\u00a0R Ricks Kandi\u00a0Jo Turley-Ames and Jennifer Wiley. 2007. Effects of working memory capacity on mental set due to domain knowledge. 
Memory & cognition 35 (2007) 1456\u20131462.","DOI":"10.3758\/BF03193615"},{"key":"e_1_3_3_2_46_2","doi-asserted-by":"crossref","unstructured":"Astrid Schepman and Paul Rodway. 2023. The General Attitudes towards Artificial Intelligence Scale (GAAIS): Confirmatory validation and associations with personality corporate distrust and general trust. International Journal of Human\u2013Computer Interaction 39 13 (2023) 2724\u20132741.","DOI":"10.1080\/10447318.2022.2085400"},{"key":"e_1_3_3_2_47_2","doi-asserted-by":"publisher","unstructured":"Ramprasaath\u00a0R. Selvaraju Michael Cogswell Abhishek Das Ramakrishna Vedantam Devi Parikh and Dhruv Batra. 2019. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. International Journal of Computer Vision 128 2 (Oct. 2019) 336\u2013359. 10.1007\/s11263-019-01228-7","DOI":"10.1007\/s11263-019-01228-7"},{"key":"e_1_3_3_2_48_2","unstructured":"Hua Shen and Ting-Hao\u00a0Kenneth Huang. 2020. How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels. arxiv:https:\/\/arXiv.org\/abs\/2008.11721\u00a0[cs.HC] https:\/\/arxiv.org\/abs\/2008.11721"},{"key":"e_1_3_3_2_49_2","first-page":"11352","volume-title":"Advances in Neural Information Processing Systems","author":"Shitole Vivswan","year":"2021","unstructured":"Vivswan Shitole, Fuxin Li, Minsuk Kahng, Prasad Tadepalli, and Alan Fern. 2021. One Explanation is Not Enough: Structured Attention Graphs for Image Classification. In Advances in Neural Information Processing Systems , M.\u00a0Ranzato, A.\u00a0Beygelzimer, Y.\u00a0Dauphin, P.S. Liang, and J.\u00a0Wortman Vaughan (Eds.), Vol.\u00a034. Curran Associates, Inc., Virtual, 11352\u201311363. https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2021\/file\/5e751896e527c862bf67251a474b3819-Paper.pdf"},{"key":"e_1_3_3_2_50_2","doi-asserted-by":"publisher","DOI":"10.7551\/mitpress\/4711.001.0001"},{"key":"e_1_3_3_2_51_2","doi-asserted-by":"publisher","DOI":"10.1109\/MysuruCon52639.2021.9641572"},{"key":"e_1_3_3_2_52_2","doi-asserted-by":"publisher","DOI":"10.1109\/AIPR.2017.8457947"},{"key":"e_1_3_3_2_53_2","unstructured":"Mukund Sundararajan Ankur Taly and Qiqi Yan. 2017. Axiomatic Attribution for Deep Networks. arxiv:https:\/\/arXiv.org\/abs\/1703.01365\u00a0[cs.LG] https:\/\/arxiv.org\/abs\/1703.01365"},{"key":"e_1_3_3_2_54_2","doi-asserted-by":"publisher","DOI":"10.1145\/3397481.3450662"},{"key":"e_1_3_3_2_55_2","doi-asserted-by":"publisher","DOI":"10.1145\/3320435.3320465"},{"key":"e_1_3_3_2_56_2","doi-asserted-by":"publisher","DOI":"10.1145\/3640544.3645253"},{"key":"e_1_3_3_2_57_2","unstructured":"Laurens Van\u00a0der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of machine learning research 9 11 (2008) 24\u00a0pages."},{"key":"e_1_3_3_2_58_2","doi-asserted-by":"crossref","unstructured":"Warren\u00a0J Von\u00a0Eschenbach. 2021. Transparency and the black box problem: Why we do not trust AI. Philosophy & Technology 34 4 (2021) 1607\u20131622.","DOI":"10.1007\/s13347-021-00477-0"},{"key":"e_1_3_3_2_59_2","doi-asserted-by":"crossref","unstructured":"Basil Wahn Laura Schmitz Frauke\u00a0Nora Gerster and Matthias Weiss. 2023. Offloading under cognitive load: Humans are willing to offload parts of an attentionally demanding task to an algorithm. 
Plos one 18 5 (2023) e0286102.","DOI":"10.1371\/journal.pone.0286102"},{"key":"e_1_3_3_2_60_2","doi-asserted-by":"crossref","unstructured":"Bo Wang Shuo Jin Qingsen Yan Haibo Xu Chuan Luo Lai Wei Wei Zhao Xuexue Hou Wenshuo Ma Zhengqing Xu et\u00a0al. 2021. AI-assisted CT imaging analysis for COVID-19 screening: Building and deploying a medical AI system. Applied soft computing 98 (2021) 106897.","DOI":"10.1016\/j.asoc.2020.106897"},{"key":"e_1_3_3_2_61_2","doi-asserted-by":"publisher","DOI":"10.1145\/3397481.3450650"},{"key":"e_1_3_3_2_62_2","doi-asserted-by":"publisher","DOI":"10.1145\/3652583.3658090"},{"key":"e_1_3_3_2_63_2","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372852"},{"key":"e_1_3_3_2_64_2","unstructured":"Roland\u00a0S Zimmermann Judy Borowski Robert Geirhos Matthias Bethge Thomas Wallis and Wieland Brendel. 2021. How well do feature visualizations support causal understanding of CNN activations? Advances in Neural Information Processing Systems 34 (2021) 11730\u201311744."}],"event":{"name":"IUI '25: 30th International Conference on Intelligent User Interfaces","sponsor":["SIGAI ACM Special Interest Group on Artificial Intelligence","SIGCHI ACM Special Interest Group on Computer-Human Interaction"],"location":"Cagliari Italy","acronym":"IUI '25"},"container-title":["Proceedings of the 30th International Conference on Intelligent User Interfaces"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3708359.3712121","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3708359.3712121","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:57:06Z","timestamp":1750298226000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3708359.3712121"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,3,24]]},"references-count":63,"alternative-id":["10.1145\/3708359.3712121","10.1145\/3708359"],"URL":"https:\/\/doi.org\/10.1145\/3708359.3712121","relation":{},"subject":[],"published":{"date-parts":[[2025,3,24]]},"assertion":[{"value":"2025-03-24","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
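
A minimal sketch of retrieving and parsing this record programmatically. It targets the public Crossref REST API route https://api.crossref.org/works/{DOI}, which returns the same {"status", "message-type", "message": {...}} envelope shown above; the use of the third-party requests library is an assumption, and every field name referenced below is taken directly from the record itself.

import requests

# DOI of the work record above.
DOI = "10.1145/3708359.3712121"

# Fetch the work record from the public Crossref REST API.
resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()

# The payload of interest lives under the "message" key of the envelope.
work = resp.json()["message"]

# "title" is a single-element list in Crossref work records.
print(work["title"][0])

# Each author entry carries given/family names and, when deposited, an ORCID iD.
for author in work["author"]:
    orcid = author.get("ORCID", "no ORCID")
    print(f"{author['given']} {author['family']} <{orcid}>")

# Reference entries carry either a resolved DOI or an unstructured citation string.
for ref in work.get("reference", []):
    print(ref.get("DOI") or ref.get("unstructured", "(no citation data)"))

Note that "reference-count" (63 here) gives the expected length of the "reference" array, so a consumer can sanity-check that the deposit was retrieved in full.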