{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,20]],"date-time":"2026-02-20T04:16:08Z","timestamp":1771560968785,"version":"3.50.1"},"reference-count":104,"publisher":"Association for Computing Machinery (ACM)","issue":"CSCW2","license":[{"start":{"date-parts":[[2023,10,4]],"date-time":"2023-10-04T00:00:00Z","timestamp":1696377600000},"content-version":"vor","delay-in-days":6,"URL":"http:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100000001","name":"NSF","doi-asserted-by":"publisher","award":["2026513"],"award-info":[{"award-number":["2026513"]}],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Hum.-Comput. Interact."],"published-print":{"date-parts":[[2023,9,28]]},"abstract":"<jats:p>The local explanation provides heatmaps on images to explain how Convolutional Neural Networks (CNNs) derive their output. Due to its visual straightforwardness, the method has been one of the most popular explainable AI (XAI) methods for diagnosing CNNs. Through our formative study (S1), however, we captured ML engineers' ambivalent perspective about the local explanation as a valuable and indispensable envision in building CNNs versus the process that exhausts them due to the heuristic nature of detecting vulnerability. Moreover, steering the CNNs based on the vulnerability learned from the diagnosis seemed highly challenging. To mitigate the gap, we designed DeepFuse, the first interactive design that realizes the direct feedback loop between a user and CNNs in diagnosing and revising CNN's vulnerability using local explanations. DeepFuse helps CNN engineers to systemically search \"unreasonable\" local explanations and annotate the new boundaries for those identified as unreasonable in a labor-efficient manner. 
Next, it steers the model based on the given annotation such that the model doesn't introduce similar mistakes. We conducted a two-day study (S2) with 12 experienced CNN engineers. Using DeepFuse, participants made a more accurate and \"reasonable\" model than the current state-of-the-art. Also, participants found the way DeepFuse guides case-based reasoning can practically improve their current practice. We provide implications for design that explain how future HCI-driven design can move our practice forward to make XAI-driven insights more actionable.<\/jats:p>","DOI":"10.1145\/3610187","type":"journal-article","created":{"date-parts":[[2023,10,4]],"date-time":"2023-10-04T15:54:10Z","timestamp":1696434850000},"page":"1-32","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":7,"title":["Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations"],"prefix":"10.1145","volume":"7","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7298-0333","authenticated-orcid":false,"given":"Tong Steven","family":"Sun","sequence":"first","affiliation":[{"name":"George Mason University, Fairfax, VA, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8045-2001","authenticated-orcid":false,"given":"Yuyang","family":"Gao","sequence":"additional","affiliation":[{"name":"The Home Depot, Inc., Atlanta, GA, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-4548-5437","authenticated-orcid":false,"given":"Shubham","family":"Khaladkar","sequence":"additional","affiliation":[{"name":"George Mason University, Fairfax, VA, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2817-6991","authenticated-orcid":false,"given":"Sijia","family":"Liu","sequence":"additional","affiliation":[{"name":"Michigan State University, East Lansing, MI, 
USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2648-9989","authenticated-orcid":false,"given":"Liang","family":"Zhao","sequence":"additional","affiliation":[{"name":"Emory University, Atlanta, GA, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2681-2774","authenticated-orcid":false,"given":"Young-Ho","family":"Kim","sequence":"additional","affiliation":[{"name":"NAVER AI Lab, Seongnam, South Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6050-5404","authenticated-orcid":false,"given":"Sungsoo Ray","family":"Hong","sequence":"additional","affiliation":[{"name":"George Mason University, Fairfax, VA, USA"}]}],"member":"320","published-online":{"date-parts":[[2023,10,4]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1--14","author":"Abdul Ashraf","unstructured":"Ashraf Abdul, Christian von der Weth, Mohan Kankanhalli, and Brian Y Lim. 2020. COGAM: measuring and moderating cognitive load in machine learning model explanations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1--14."},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1037\/0021-9010.69.2.334"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/2702123.2702509"},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00778-021-00671-8"},{"key":"e_1_2_1_5_1","volume-title":"Do convolutional neural networks learn class hierarchy? IEEE transactions on visualization and computer graphics","author":"Bilal Alsallakh","year":"2017","unstructured":"Alsallakh Bilal, Amin Jourabloo, Mao Ye, Xiaoming Liu, and Liu Ren. 2017. Do convolutional neural networks learn class hierarchy? IEEE transactions on visualization and computer graphics, Vol. 24, 1 (2017), 152--162."},{"key":"e_1_2_1_6_1","volume-title":"Algorithmic Bias and Fairness in Case-Based Reasoning. In International Conference on Case-Based Reasoning. 
Springer, 48--62","author":"Blanzeisky William","year":"2022","unstructured":"William Blanzeisky, Barry Smyth, and P\u00e1draig Cunningham. 2022. Algorithmic Bias and Fairness in Case-Based Reasoning. In International Conference on Case-Based Reasoning. Springer, 48--62."},{"key":"e_1_2_1_7_1","volume-title":"Benchmarking and survey of explanation methods for black box models. arXiv preprint arXiv:2102.13076","author":"Bodria Francesco","year":"2021","unstructured":"Francesco Bodria, Fosca Giannotti, Riccardo Guidotti, Francesca Naretto, Dino Pedreschi, and Salvatore Rinzivillo. 2021. Benchmarking and survey of explanation methods for black box models. arXiv preprint arXiv:2102.13076 (2021)."},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.5555\/2817912.2817913"},{"key":"e_1_2_1_9_1","unstructured":"John Brooke et al. 1996. SUS-A quick and dirty usability scale. Usability evaluation in industry Vol. 189 194 (1996) 4--7."},{"key":"e_1_2_1_10_1","volume-title":"Why AI Needs Human Input (And Always Will). https:\/\/www.forbes.com\/sites\/forbestechcouncil\/2019\/10\/30\/why-ai-needs-human-input-and-always-will\/ Retrieved","author":"Brydon Antony","year":"2022","unstructured":"Antony Brydon. 2019. Why AI Needs Human Input (And Always Will). https:\/\/www.forbes.com\/sites\/forbestechcouncil\/2019\/10\/30\/why-ai-needs-human-input-and-always-will\/ Retrieved September 10, 2022 from"},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/2858036.2858498"},{"key":"e_1_2_1_12_1","volume-title":"Grad-cam: Generalized gradient-based visual explanations for deep convolutional networks. In 2018 IEEE winter conference on applications of computer vision (WACV)","author":"Chattopadhay Aditya","year":"2018","unstructured":"Aditya Chattopadhay, Anirban Sarkar, Prantik Howlader, and Vineeth N Balasubramanian. 2018. Grad-cam: Generalized gradient-based visual explanations for deep convolutional networks. 
In 2018 IEEE winter conference on applications of computer vision (WACV). IEEE, 839--847."},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/2675133.2675214"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300460"},{"key":"e_1_2_1_15_1","volume-title":"The frontiers of fairness in machine learning. arXiv preprint arXiv:1810.08810","author":"Chouldechova Alexandra","year":"2018","unstructured":"Alexandra Chouldechova and Aaron Roth. 2018. The frontiers of fairness in machine learning. arXiv preprint arXiv:1810.08810 (2018)."},{"key":"e_1_2_1_16_1","volume-title":"Proceedings of the ACM on Human-Computer Interaction CSCW","author":"Chung Chaeyeon","year":"2021","unstructured":"Chaeyeon Chung, Jung Soo Lee, Kyungmin Park, Junsoo Lee, Jaegul Choo, and Sungsoo Ray Hong. 2021. Understanding Human-side Impact of Sequencing Images in Batch Labeling for Subjective Tasks. Proceedings of the ACM on Human-Computer Interaction CSCW (2021)."},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3359164"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/PacificVis48177.2020.7090"},{"key":"e_1_2_1_19_1","volume-title":"Using Case-Based Reasoning for Capturing Expert Knowledge on Explanation Methods. In International Conference on Case-Based Reasoning. Springer, 3--17","author":"Darias Jesus M","year":"2022","unstructured":"Jesus M Darias, Marta Caro-Mart\u00ednez, Bel\u00e9n D\u00edaz-Agudo, and Juan A Recio-Garcia. 2022. Using Case-Based Reasoning for Capturing Expert Knowledge on Explanation Methods. In International Conference on Case-Based Reasoning. Springer, 3--17."},{"key":"e_1_2_1_20_1","volume-title":"Eric Lehman, Caiming Xiong, Richard Socher, and Byron C Wallace.","author":"DeYoung Jay","year":"2019","unstructured":"Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C Wallace. 2019. ERASER: A benchmark to evaluate rationalized NLP models. 
arXiv preprint arXiv:1911.03429 (2019)."},{"key":"e_1_2_1_21_1","volume-title":"The Hawthorne effect: A fresh examination. Educational studies","author":"Diaper Gordon","year":"1990","unstructured":"Gordon Diaper. 1990. The Hawthorne effect: A fresh examination. Educational studies, Vol. 16, 3 (1990), 261--267."},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3185517"},{"key":"e_1_2_1_23_1","unstructured":"Shannon Leigh Eggers and Char Sample. 2020. Vulnerabilities in Artificial Intelligence and Machine Learning Applications and Data. Technical Report. Idaho National Lab.(INL) Idaho Falls ID (United States)."},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/604045.604056"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.01096"},{"key":"e_1_2_1_26_1","volume-title":"Dazhou Yu, and Liang Zhao.","author":"Gao Yuyang","year":"2022","unstructured":"Yuyang Gao, Siyi Gu, Junji Jiang, Sungsoo Ray Hong, Dazhou Yu, and Liang Zhao. 2022a. Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning. arXiv preprint arXiv:2212.03954 (2022)."},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDM51629.2021.00023"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/3534678.3539419"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/3555590"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/3301275.3302324"},{"key":"e_1_2_1_31_1","volume-title":"A 20-Year Community Roadmap for Artificial Intelligence Research in the US. arXiv preprint arXiv:1908.02624","author":"Gil Yolanda","year":"2019","unstructured":"Yolanda Gil and Bart Selman. 2019. A 20-Year Community Roadmap for Artificial Intelligence Research in the US. arXiv preprint arXiv:1908.02624 (2019)."},{"key":"e_1_2_1_32_1","volume-title":"Why do you think that? exploring faithful sentence-level rationales without supervision. 
arXiv preprint arXiv:2010.03384","author":"Glockner Max","year":"2020","unstructured":"Max Glockner, Ivan Habernal, and Iryna Gurevych. 2020. Why do you think that? exploring faithful sentence-level rationales without supervision. arXiv preprint arXiv:2010.03384 (2020)."},{"key":"e_1_2_1_33_1","volume-title":"SHAI 2023: Workshop on Designing for Safety in Human-AI Interactions. In Companion Proceedings of the 28th International Conference on Intelligent User Interfaces. 199--201","author":"Goyal Nitesh","year":"2023","unstructured":"Nitesh Goyal, Sungsoo Ray Hong, Regan L Mandryk, Toby Jia-Jun Li, Kurt Luther, and Dakuo Wang. 2023. SHAI 2023: Workshop on Designing for Safety in Human-AI Interactions. In Companion Proceedings of the 28th International Conference on Intelligent User Interfaces. 199--201."},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/3359152"},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/1656274.1656278"},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3173574.3173582"},{"key":"e_1_2_1_37_1","volume-title":"Ways of Knowing in HCI","author":"Hayes Gillian R","unstructured":"Gillian R Hayes. 2014. Knowing by doing: action research as an approach to HCI. In Ways of Knowing in HCI. Springer, 49--68."},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.322"},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01219-9_47"},{"key":"e_1_2_1_40_1","volume-title":"Gender and Racial Bias in Visual Question Answering Datasets. arXiv preprint arXiv:2205.08148","author":"Hirota Yusuke","year":"2022","unstructured":"Yusuke Hirota, Yuta Nakashima, and Noa Garcia. 2022. Gender and Racial Bias in Visual Question Answering Datasets. 
arXiv preprint arXiv:2205.08148 (2022)."},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376177"},{"key":"e_1_2_1_42_1","volume-title":"Towards Evaluating Exploratory Model Building Process with AutoML Systems. arXiv preprint arXiv:2009.00449","author":"Hong Sungsoo Ray","year":"2020","unstructured":"Sungsoo Ray Hong, Sonia Castelo, Vito D'Orazio, Christopher Benthune, Aecio Santos, Scott Langevin, David Jonker, Enrico Bertini, and Juliana Freire. 2020a. Towards Evaluating Exploratory Model Building Process with AutoML Systems. arXiv preprint arXiv:2009.00449 (2020)."},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1145\/3392878"},{"key":"e_1_2_1_44_1","volume-title":"CHI 2019 Workshop, Emerging Perspectives in Human-Centered Machine Learning. ACM.","author":"Hong Sungsoo Ray","year":"2019","unstructured":"Sungsoo Ray Hong, Jorge Piazentin Ono, Juliana Freire, and Enrico Bertini. 2019. Disseminating Machine Learning to domain experts: Understanding challenges and opportunities in supporting a model building process. In CHI 2019 Workshop, Emerging Perspectives in Human-Centered Machine Learning. ACM."},{"key":"e_1_2_1_45_1","volume-title":"Ways of Knowing in HCI","author":"Hudson Scott E","unstructured":"Scott E Hudson and Jennifer Mankoff. 2014. Concepts, values, and methods for technical human-computer interaction research. In Ways of Knowing in HCI. Springer, 69--93."},{"key":"e_1_2_1_46_1","first-page":"26726","article-title":"Improving deep learning interpretability by saliency guided training","volume":"34","author":"Ismail Aya Abdelsalam","year":"2021","unstructured":"Aya Abdelsalam Ismail, Hector Corrada Bravo, and Soheil Feizi. 2021. Improving deep learning interpretability by saliency guided training. Advances in Neural Information Processing Systems, Vol. 
34 (2021), 26726--26739.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_1_47_1","volume-title":"ActiVis: Visual exploration of industry-scale deep neural network models","author":"Kahng Minsuk","year":"2017","unstructured":"Minsuk Kahng, Pierre Y Andrews, Aditya Kalro, and Duen Horng Chau. 2017. ActiVis: Visual exploration of industry-scale deep neural network models. IEEE transactions on visualization and computer graphics, Vol. 24, 1 (2017), 88--97."},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939502.2939503"},{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.3390\/s21072514"},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i2.16269"},{"key":"e_1_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1073\/pnas.1611835114"},{"key":"e_1_2_1_52_1","volume-title":"Proceedings of 16th European conference on computer-supported cooperative work-exploratory papers. European Society for Socially Embedded Technologies (EUSSET).","author":"Koch Michael","year":"2018","unstructured":"Michael Koch, Kai von Luck, Jan Schwarzer, and Susanne Draheim. 2018. The novelty effect in large display deployments--Experiences and lessons-learned for evaluating prototypes. In Proceedings of 16th European conference on computer-supported cooperative work-exploratory papers. European Society for Socially Embedded Technologies (EUSSET)."},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1109\/VAST.2017.8585720"},{"key":"e_1_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1145\/2858036.2858529"},{"key":"e_1_2_1_55_1","volume-title":"Sociological practice: Linking theory and social research","author":"Layder Derek","unstructured":"Derek Layder. 1998. Sociological practice: Linking theory and social research. 
Sage."},{"key":"e_1_2_1_56_1","first-page":"25123","article-title":"Learning debiased representation via disentangled feature augmentation","volume":"34","author":"Lee Jungsoo","year":"2021","unstructured":"Jungsoo Lee, Eungyeup Kim, Juyoung Lee, Jihyeon Lee, and Jaegul Choo. 2021a. Learning debiased representation via disentangled feature augmentation. Advances in Neural Information Processing Systems, Vol. 34 (2021), 25123--25133.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00545"},{"key":"e_1_2_1_58_1","volume-title":"A survey of convolutional neural networks: analysis, applications, and prospects","author":"Li Zewen","year":"2021","unstructured":"Zewen Li, Fan Liu, Wenjie Yang, Shouheng Peng, and Jun Zhou. 2021. A survey of convolutional neural networks: analysis, applications, and prospects. IEEE transactions on neural networks and learning systems (2021)."},{"key":"e_1_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"e_1_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP.2017.8296695"},{"key":"e_1_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.1145\/3377815.3381377"},{"key":"e_1_2_1_62_1","volume-title":"From local explanations to global understanding with explainable AI for trees. Nature machine intelligence","author":"Lundberg Scott M","year":"2020","unstructured":"Scott M Lundberg, Gabriel Erion, Hugh Chen, Alex DeGrave, Jordan M Prutkin, Bala Nair, Ronit Katz, Jonathan Himmelfarb, Nisha Bansal, and Su-In Lee. 2020. From local explanations to global understanding with explainable AI for trees. Nature machine intelligence, Vol. 2, 1 (2020), 56--67."},{"key":"e_1_2_1_63_1","unstructured":"Yao Ming. 2017. A survey on visualization for explainable classifiers. 
(2017)."},{"key":"e_1_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1109\/VAST.2017.8585721"},{"key":"e_1_2_1_65_1","volume-title":"Rulematrix: Visualizing and understanding classifiers with rules","author":"Ming Yao","year":"2018","unstructured":"Yao Ming, Huamin Qu, and Enrico Bertini. 2018. Rulematrix: Visualizing and understanding classifiers with rules. IEEE transactions on visualization and computer graphics, Vol. 25, 1 (2018), 342--352."},{"key":"e_1_2_1_66_1","volume-title":"Embedding human knowledge into deep neural network via attention map. arXiv preprint arXiv:1905.03540","author":"Mitsuhara Masahiro","year":"2019","unstructured":"Masahiro Mitsuhara, Hiroshi Fukui, Yusuke Sakashita, Takanori Ogata, Tsubasa Hirakawa, Takayoshi Yamashita, and Hironobu Fujiyoshi. 2019. Embedding human knowledge into deep neural network via attention map. arXiv preprint arXiv:1905.03540 (2019)."},{"key":"e_1_2_1_67_1","doi-asserted-by":"publisher","DOI":"10.1145\/3387166"},{"key":"e_1_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN48605.2020.9206626"},{"key":"e_1_2_1_69_1","unstructured":"Don Norman. 2013. The design of everyday things: Revised and expanded edition. Basic books."},{"key":"e_1_2_1_70_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2019.01.012"},{"key":"e_1_2_1_71_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2021.3114858"},{"key":"e_1_2_1_72_1","volume-title":"Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems","author":"Paszke Adam","year":"2019","unstructured":"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, Vol. 32 (2019)."},{"key":"e_1_2_1_73_1","unstructured":"Nicola Pezzotti. 2019. 
Dimensionality-Reduction Algorithms for Progressive Visual Analytics. (2019)."},{"key":"e_1_2_1_74_1","volume-title":"Squares: Supporting interactive performance analysis for multiclass classifiers","author":"Ren Donghao","year":"2016","unstructured":"Donghao Ren, Saleema Amershi, Bongshin Lee, Jina Suh, and Jason D Williams. 2016. Squares: Supporting interactive performance analysis for multiclass classifiers. IEEE transactions on visualization and computer graphics, Vol. 23, 1 (2016), 61--70."},{"key":"e_1_2_1_75_1","volume-title":"International Conference on Machine Learning. PMLR, 8346--8356","author":"Sagawa Shiori","year":"2020","unstructured":"Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, and Percy Liang. 2020. An investigation of why overparameterization exacerbates spurious correlations. In International Conference on Machine Learning. PMLR, 8346--8356."},{"key":"e_1_2_1_76_1","volume-title":"The coding manual for qualitative researchers","author":"Johnny Salda","unstructured":"Johnny Salda\u00f1a. 2015. The coding manual for qualitative researchers. Sage."},{"key":"e_1_2_1_77_1","doi-asserted-by":"publisher","DOI":"10.1145\/3328519.3329134"},{"key":"e_1_2_1_78_1","unstructured":"Irving Seidman. 2006. Interviewing as qualitative research: A guide for researchers in education and the social sciences. Teachers college press."},{"key":"e_1_2_1_79_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.74"},{"key":"e_1_2_1_80_1","doi-asserted-by":"publisher","DOI":"10.1007\/s13222-020-00351-x"},{"key":"e_1_2_1_81_1","volume-title":"Adobe Photoshop CS3: Complete concepts and techniques","author":"Shelly Gary B","unstructured":"Gary B Shelly, Thomas J Cashman, and Joy L Starks. 2008. Adobe Photoshop CS3: Complete concepts and techniques. Course Technology Press."},{"key":"e_1_2_1_82_1","volume-title":"The sciences of the artificial","author":"Simon Herbert A","year":"1969","unstructured":"Herbert A Simon. 1981. The sciences of the artificial, 1969. 
Massachusetts Institute of Technology (1981)."},{"key":"e_1_2_1_83_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.01108"},{"key":"e_1_2_1_84_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.acl-long.415"},{"key":"e_1_2_1_85_1","doi-asserted-by":"publisher","DOI":"10.1145\/3531146.3534639"},{"key":"e_1_2_1_86_1","volume-title":"explAIner: A visual analytics framework for interactive and explainable machine learning","author":"Spinner Thilo","year":"2019","unstructured":"Thilo Spinner, Udo Schlegel, Hanna Sch\u00e4fer, and Mennatallah El-Assady. 2019. explAIner: A visual analytics framework for interactive and explainable machine learning. IEEE transactions on visualization and computer graphics, Vol. 26, 1 (2019), 1064--1074."},{"key":"e_1_2_1_87_1","doi-asserted-by":"publisher","DOI":"10.1186\/s41074-019-0053-3"},{"key":"e_1_2_1_88_1","doi-asserted-by":"publisher","DOI":"10.1145\/1518701.1518895"},{"key":"e_1_2_1_89_1","doi-asserted-by":"publisher","DOI":"10.1145\/3359313"},{"key":"e_1_2_1_90_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2016.2589879"},{"key":"e_1_2_1_91_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00541"},{"key":"e_1_2_1_92_1","volume-title":"Dodrio: Exploring transformer models with interactive visualization. arXiv preprint arXiv:2103.14625","author":"Wang Zijie J","year":"2021","unstructured":"Zijie J Wang, Robert Turko, and Duen Horng Chau. 2021. Dodrio: Exploring transformer models with interactive visualization. arXiv preprint arXiv:2103.14625 (2021)."},{"key":"e_1_2_1_93_1","doi-asserted-by":"publisher","DOI":"10.1145\/3334480.3382899"},{"key":"e_1_2_1_94_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-14923-8_5"},{"key":"e_1_2_1_95_1","volume-title":"The what-if tool: Interactive probing of machine learning models","author":"Wexler James","year":"2019","unstructured":"James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, Fernanda Vi\u00e9gas, and Jimbo Wilson. 2019. 
The what-if tool: Interactive probing of machine learning models. IEEE transactions on visualization and computer graphics, Vol. 26, 1 (2019), 56--65."},{"key":"e_1_2_1_96_1","volume-title":"Visualizing dataflow graphs of deep learning models in tensorflow","author":"Wongsuphasawat Kanit","year":"2017","unstructured":"Kanit Wongsuphasawat, Daniel Smilkov, James Wexler, Jimbo Wilson, Dandelion Mane, Doug Fritz, Dilip Krishnan, Fernanda B Vi\u00e9gas, and Martin Wattenberg. 2017. Visualizing dataflow graphs of deep learning models in tensorflow. IEEE transactions on visualization and computer graphics, Vol. 24, 1 (2017), 1--12."},{"key":"e_1_2_1_97_1","doi-asserted-by":"publisher","DOI":"10.1145\/3491102.3502075"},{"key":"e_1_2_1_98_1","first-page":"5505","article-title":"DVERGE: diversifying vulnerabilities for enhanced robust generation of ensembles","volume":"33","author":"Yang Huanrui","year":"2020","unstructured":"Huanrui Yang, Jingyang Zhang, Hongliang Dong, Nathan Inkawhich, Andrew Gardner, Andrew Touchet, Wesley Wilkes, Heath Berry, and Hai Li. 2020. DVERGE: diversifying vulnerabilities for enhanced robust generation of ensembles. Advances in Neural Information Processing Systems, Vol. 33 (2020), 5505--5515.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_1_99_1","doi-asserted-by":"publisher","DOI":"10.1145\/3196709.3196730"},{"key":"e_1_2_1_100_1","doi-asserted-by":"publisher","DOI":"10.1145\/3196709.3196729"},{"key":"e_1_2_1_101_1","unstructured":"Omar Zaidan, Jason Eisner, and Christine Piatko. 2007. Using \"annotator rationales\" to improve machine learning for text categorization. In Human language technologies 2007: The conference of the North American chapter of the association for computational linguistics; proceedings of the main conference. 
260--267."},{"key":"e_1_2_1_102_1","doi-asserted-by":"publisher","DOI":"10.1145\/3392826"},{"key":"e_1_2_1_103_1","doi-asserted-by":"publisher","DOI":"10.1145\/3359158"},{"key":"e_1_2_1_104_1","volume-title":"Men also like shopping: Reducing gender bias amplification using corpus-level constraints. arXiv preprint arXiv:1707.09457","author":"Zhao Jieyu","year":"2017","unstructured":"Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. arXiv preprint arXiv:1707.09457 (2017)."}],"container-title":["Proceedings of the ACM on Human-Computer Interaction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3610187","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3610187","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3610187","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,21]],"date-time":"2025-08-21T04:26:57Z","timestamp":1755750417000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3610187"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,9,28]]},"references-count":104,"journal-issue":{"issue":"CSCW2","published-print":{"date-parts":[[2023,9,28]]}},"alternative-id":["10.1145\/3610187"],"URL":"https:\/\/doi.org\/10.1145\/3610187","relation":{},"ISSN":["2573-0142"],"issn-type":[{"value":"2573-0142","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,9,28]]},"assertion":[{"value":"2023-10-04","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}