{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,16]],"date-time":"2026-04-16T02:42:56Z","timestamp":1776307376909,"version":"3.50.1"},"reference-count":74,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2025,5,2]],"date-time":"2025-05-02T00:00:00Z","timestamp":1746144000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Hum.-Comput. Interact."],"published-print":{"date-parts":[[2025,5,2]]},"abstract":"<jats:p>Large Language Models (LLMs) have gained attention in research and industry, aiming to streamline processes and enhance text analysis performance. Thematic Analysis (TA), a prevalent qualitative method for analyzing interview content, often requires at least two human experts to review and analyze data. This study demonstrates the feasibility of LLM-Assisted Thematic Analysis (LATA) using GPT-4 and Gemini. Specifically, we conducted semi-structured interviews with 14 researchers to gather insights on their experiences generating and analyzing Online Social Network (OSN) communications datasets. Following Braun and Clarke's six-phase TA framework with an inductive approach, we initially analyzed our interview transcripts with human experts. Subsequently, we iteratively designed prompts to guide LLMs through a similar process. We compare and discuss the manually analyzed outcomes with responses generated by LLMs and achieve a cosine similarity score up to 0.76, demonstrating a promising prospect for LATA. 
Additionally, the study delves into researchers' experiences navigating the complexities of collecting and analyzing OSN data, offering recommendations for future research and application designers.<\/jats:p>","DOI":"10.1145\/3711022","type":"journal-article","created":{"date-parts":[[2025,5,3]],"date-time":"2025-05-03T01:35:05Z","timestamp":1746236105000},"page":"1-28","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":12,"title":["LATA: A Pilot Study on LLM-Assisted Thematic Analysis of Online Social Network Data Generation Experiences"],"prefix":"10.1145","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-0308-6033","authenticated-orcid":false,"given":"Qile","family":"Wang","sequence":"first","affiliation":[{"name":"University of Delaware, Newark, Delaware, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0003-9451-1361","authenticated-orcid":false,"given":"Moath","family":"Erqsous","sequence":"additional","affiliation":[{"name":"University of Delaware, Newark, Delaware, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0936-7840","authenticated-orcid":false,"given":"Kenneth E.","family":"Barner","sequence":"additional","affiliation":[{"name":"University of Delaware, Newark, Delaware, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5359-6520","authenticated-orcid":false,"given":"Matthew Louis","family":"Mauriello","sequence":"additional","affiliation":[{"name":"University of Delaware, Newark, Delaware, USA"}]}],"member":"320","published-online":{"date-parts":[[2025,5,2]]},"reference":[{"key":"e_1_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.1177\/1609406920984608"},{"key":"e_1_2_2_2_1","volume-title":"Maria Korobeynikova, and Fabrizio Gilardi.","author":"Alizadeh Meysam","year":"2023","unstructured":"Meysam Alizadeh, Ma\u00ebl Kubli, Zeynab Samei, Shirin Dehghani, Juan Diego Bermeo, Maria Korobeynikova, and Fabrizio Gilardi. 2023. 
Open-source large language models outperform crowd workers and approach ChatGPT in text-annotation tasks. arXiv preprint arXiv:2307.02179 (2023)."},{"key":"e_1_2_2_3_1","doi-asserted-by":"publisher","DOI":"10.2196\/jmir.3050"},{"key":"e_1_2_2_4_1","unstructured":"Simon Arvidsson and Johan Axell. 2023. Prompt engineering guidelines for LLMs in Requirements Engineering. (2023)."},{"key":"e_1_2_2_5_1","doi-asserted-by":"crossref","unstructured":"Julian Ashwin, Aditya Chhabra, and Vijayendra Rao. 2023. Using Large Language Models for Qualitative Analysis can Introduce Serious Bias. arXiv:2309.17147 [cs.CL]","DOI":"10.1596\/1813-9450-10597"},{"key":"e_1_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.procs.2015.08.345"},{"key":"e_1_2_2_7_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.comcom.2015.09.020"},{"key":"e_1_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1191\/1478088706qp063oa"},{"key":"e_1_2_2_9_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4614-5583-7_311"},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1057\/9781137453662_3"},{"key":"e_1_2_2_11_1","volume-title":"Chateval: Towards better llm-based evaluators through multi-agent debate. arXiv preprint arXiv:2308.07201","author":"Chan Chi-Min","year":"2023","unstructured":"Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. 2023. Chateval: Towards better llm-based evaluators through multi-agent debate. arXiv preprint arXiv:2308.07201 (2023)."},{"key":"e_1_2_2_12_1","volume-title":"Chien-Sheng Wu, et al.","author":"Chen Xiang'Anthony'","year":"2023","unstructured":"Xiang'Anthony' Chen, Jeff Burke, Ruofei Du, Matthew K Hong, Jennifer Jacobs, Philippe Laban, Dingzeyu Li, Nanyun Peng, Karl DD Willis, Chien-Sheng Wu, et al. 2023. Next Steps for Human-Centered Generative AI: A Technical Perspective. arXiv preprint arXiv:2306.15774 (2023). 
arXiv:2306.15774"},{"key":"e_1_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1073\/pnas.2316205120"},{"key":"e_1_2_2_14_1","volume-title":"LLM-Assisted Content Analysis: Using Large Language Models to Support Deductive Coding. arXiv preprint arXiv:2306.14924","author":"Chew Robert","year":"2023","unstructured":"Robert Chew, John Bollenbacher, Michael Wenger, Jessica Speer, and Annice Kim. 2023. LLM-Assisted Content Analysis: Using Large Language Models to Support Deductive Coding. arXiv preprint arXiv:2306.14924 (2023)."},{"key":"e_1_2_2_15_1","volume-title":"LLM-in-the-loop: Leveraging Large Language Model for Thematic Analysis. arXiv preprint arXiv:2310.15100","author":"Dai Shih-Chieh","year":"2023","unstructured":"Shih-Chieh Dai, Aiping Xiong, and Lun-Wei Ku. 2023. LLM-in-the-loop: Leveraging Large Language Model for Thematic Analysis. arXiv preprint arXiv:2310.15100 (2023)."},{"key":"e_1_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3406865.3418333"},{"key":"e_1_2_2_17_1","volume-title":"Performing an Inductive Thematic Analysis of Semi-Structured Interviews With a Large Language Model: An Exploration and Provocation on the Limits of the Approach. Social Science Computer Review","author":"Paoli Stefano De","year":"2023","unstructured":"Stefano De Paoli. 2023. Performing an Inductive Thematic Analysis of Semi-Structured Interviews With a Large Language Model: An Exploration and Provocation on the Limits of the Approach. Social Science Computer Review (2023), 08944393231220483."},{"key":"e_1_2_2_18_1","volume-title":"Writing user personas with Large Language Models: Testing phase 6 of a Thematic Analysis of semi-structured interviews. arXiv preprint arXiv:2305.18099","author":"Paoli Stefano De","year":"2023","unstructured":"Stefano De Paoli. 2023. Writing user personas with Large Language Models: Testing phase 6 of a Thematic Analysis of semi-structured interviews. arXiv preprint arXiv:2305.18099 (2023). 
arXiv:2305.18099"},{"key":"e_1_2_2_19_1","volume-title":"Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)."},{"key":"e_1_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.4102\/sajim.v11i1.397"},{"key":"e_1_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1080\/10361146.2014.900530"},{"key":"e_1_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.31235\/osf.io\/56f4q"},{"key":"e_1_2_2_23_1","first-page":"20","article-title":"ReCal: Intercoder reliability calculation as a web service","volume":"5","author":"Freelon Deen G","year":"2010","unstructured":"Deen G Freelon. 2010. ReCal: Intercoder reliability calculation as a web service. International Journal of Internet Science 5, 1 (2010), 20--33.","journal-title":"International Journal of Internet Science"},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1016\/s2212-5671(13)00166-4"},{"key":"e_1_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3584931.3607500"},{"key":"e_1_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.chb.2018.08.039"},{"key":"e_1_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3597503.3623306"},{"key":"e_1_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.iheduc.2015.02.004"},{"key":"e_1_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.3390\/bdcc7020062"},{"key":"e_1_2_2_30_1","doi-asserted-by":"publisher","DOI":"10.1002\/asi.24368"},{"key":"e_1_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.26599\/bdma.2020.9020006"},{"key":"e_1_2_2_32_1","doi-asserted-by":"publisher","DOI":"10.1186\/s40561-024-00310-z"},{"key":"e_1_2_2_33_1","unstructured":"Andrej Karpathy. 2023. \"The hottest new programming language is English\". 
https:\/\/twitter.com\/karpathy\/status\/1617979122625712128?lang=en [Twitter] Accessed: 2023-11-12."},{"key":"e_1_2_2_34_1","volume-title":"Thematic analysis of qualitative data: AMEE Guide No. 131. Medical teacher 42, 8","author":"Kiger Michelle E","year":"2020","unstructured":"Michelle E Kiger and Lara Varpio. 2020. Thematic analysis of qualitative data: AMEE Guide No. 131. Medical teacher 42, 8 (2020), 846--854."},{"key":"e_1_2_2_35_1","volume-title":"Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa.","author":"Kojima Takeshi","year":"2022","unstructured":"Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in neural information processing systems 35 (2022), 22199--22213."},{"key":"e_1_2_2_36_1","volume-title":"Shiv Vignesh Murty, and Swathy Ragupathy","author":"Kumar Ashutosh","year":"2024","unstructured":"Ashutosh Kumar, Sagarika Singh, Shiv Vignesh Murty, and Swathy Ragupathy. 2024. The Ethics of Interaction: Mitigating Security Threats in LLMs. ArXiv abs\/2401.12273 (2024). arXiv:2401.12273 https:\/\/api.semanticscholar.org\/CorpusID:267095035"},{"key":"e_1_2_2_37_1","doi-asserted-by":"publisher","DOI":"10.3390\/math11102320"},{"key":"e_1_2_2_38_1","volume-title":"Unlocking Health Literacy: The Ultimate Guide to Hypertension Education From ChatGPT Versus Google Gemini. Cureus 16","author":"Lee Thomas J","year":"2024","unstructured":"Thomas J Lee, Daniel J Campbell, Shriya Patel, Afif Hossain, Navid Radfar, Emaad Siddiqui, and Julius M Gardin. 2024. Unlocking Health Literacy: The Ultimate Guide to Hypertension Education From ChatGPT Versus Google Gemini. Cureus 16 (2024). https:\/\/api.semanticscholar.org\/CorpusID:269632563"},{"key":"e_1_2_2_39_1","volume-title":"Calibrating LLM-Based Evaluator. 
arXiv preprint arXiv:2309.13308","author":"Liu Yuxuan","year":"2023","unstructured":"Yuxuan Liu, Tianchi Yang, Shaohan Huang, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, and Qi Zhang. 2023. Calibrating LLM-Based Evaluator. arXiv preprint arXiv:2309.13308 (2023)."},{"key":"e_1_2_2_40_1","doi-asserted-by":"publisher","DOI":"10.2139\/ssrn.4333415"},{"key":"e_1_2_2_41_1","doi-asserted-by":"publisher","unstructured":"Ketut Mardiansyah and Wayan Surya. 2024. Comparative Analysis of ChatGPT-4 and Google Gemini for Spam Detection on the SpamAssassin Public Mail Corpus. (2024). doi:10.21203\/rs.3.rs-4005702\/v1","DOI":"10.21203\/rs.3.rs-4005702\/v1"},{"key":"e_1_2_2_42_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.compenvurbsys.2018.11.001"},{"key":"e_1_2_2_43_1","volume-title":"Interrater reliability: the kappa statistic. Biochemia medica 22, 3","author":"McHugh Mary L","year":"2012","unstructured":"Mary L McHugh. 2012. Interrater reliability: the kappa statistic. Biochemia medica 22, 3 (2012), 276--282. doi:10.11613\/bm.2012.031"},{"key":"e_1_2_2_44_1","unstructured":"OpenAI. 2023. GPT-4 Technical Report. arXiv:2303.08774 [cs.CL]"},{"key":"e_1_2_2_45_1","doi-asserted-by":"publisher","DOI":"10.1145\/3544548.3580766"},{"key":"e_1_2_2_46_1","unstructured":"Giada Pistilli. 2022. What lies behind AGI: ethical concerns related to LLMs. https:\/\/api.semanticscholar.org\/CorpusID:248913224"},{"key":"e_1_2_2_47_1","doi-asserted-by":"publisher","DOI":"10.1177\/15586898221126816"},{"key":"e_1_2_2_48_1","volume-title":"Gemini vs GPT-4V: A Preliminary Comparison and Combination of Vision-Language Models Through Qualitative Cases. ArXiv abs\/2312.15011","author":"Qi Zhangyang","year":"2023","unstructured":"Zhangyang Qi, Ye Fang, Mengchen Zhang, Zeyi Sun, Tong Wu, Ziwei Liu, Dahua Lin, Jiaqi Wang, and Hengshuang Zhao. 2023. Gemini vs GPT-4V: A Preliminary Comparison and Combination of Vision-Language Models Through Qualitative Cases. 
ArXiv abs\/2312.15011 (2023). arXiv:2312.15011 https:\/\/api.semanticscholar.org\/CorpusID:266550760"},{"key":"e_1_2_2_49_1","volume-title":"The Frontier of Data Erasure: Machine Unlearning for Large Language Models. ArXiv abs\/2403.15779","author":"Qu Youyang","year":"2024","unstructured":"Youyang Qu, Ming Ding, Nan Sun, Kanchana Thilakarathna, Tianqing Zhu, and Dusit Tao Niyato. 2024. The Frontier of Data Erasure: Machine Unlearning for Large Language Models. ArXiv abs\/2403.15779 (2024). arXiv:2403.15779 https:\/\/api.semanticscholar.org\/CorpusID:268681648"},{"key":"e_1_2_2_50_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.trc.2016.12.008"},{"key":"e_1_2_2_51_1","unstructured":"Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean-baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530 (2024)."},{"key":"e_1_2_2_52_1","article-title":"War of the chatbots: Bard, Bing Chat, ChatGPT, Ernie and beyond. The new AI gold rush and its impact on higher education","volume":"6","author":"Rudolph J\u00fcrgen","year":"2023","unstructured":"J\u00fcrgen Rudolph, Shannon Tan, and Samson Tan. 2023. War of the chatbots: Bard, Bing Chat, ChatGPT, Ernie and beyond. The new AI gold rush and its impact on higher education. Journal of Applied Learning and Teaching 6, 1 (2023).","journal-title":"Journal of Applied Learning and Teaching"},{"key":"e_1_2_2_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445518"},{"key":"e_1_2_2_54_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.future.2016.10.019"},{"key":"e_1_2_2_55_1","doi-asserted-by":"publisher","DOI":"10.1145\/1644893.1644899"},{"key":"e_1_2_2_56_1","volume-title":"Joon Sung Park, and Diyi Yang.","author":"Shen Hong","year":"2023","unstructured":"Hong Shen, Tianshi Li, Toby Jia-Jun Li, Joon Sung Park, and Diyi Yang. 2023. 
Shaping the Emerging Norms of Using Large Language Models in Social Computing Research. In Companion Publication of the 2023 Conference on Computer Supported Cooperative Work and Social Computing. 569--571."},{"key":"e_1_2_2_57_1","doi-asserted-by":"publisher","DOI":"10.1177\/1609406915624574"},{"key":"e_1_2_2_58_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2023"},{"key":"e_1_2_2_59_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijinfomgt.2017.12.002"},{"key":"e_1_2_2_60_1","doi-asserted-by":"publisher","DOI":"10.1109\/tvcg.2022.3209479"},{"key":"e_1_2_2_61_1","volume-title":"Alpaca: A strong, replicable instruction-following model","author":"Taori Rohan","year":"2023","unstructured":"Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. 2023. Alpaca: A strong, replicable instruction-following model. Stanford Center for Research on Foundation Models. https:\/\/crfm.stanford.edu\/2023\/03\/13\/alpaca.html 3, 6 (2023), 7."},{"key":"e_1_2_2_62_1","volume-title":"Chatgpt-4 outperforms experts and crowd workers in annotating political twitter messages with zero-shot learning. arXiv preprint arXiv:2304.06588","author":"T\u00f6rnberg Petter","year":"2023","unstructured":"Petter T\u00f6rnberg. 2023. Chatgpt-4 outperforms experts and crowd workers in annotating political twitter messages with zero-shot learning. arXiv preprint arXiv:2304.06588 (2023)."},{"key":"e_1_2_2_63_1","volume-title":"Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971","author":"Touvron Hugo","year":"2023","unstructured":"Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. 
arXiv preprint arXiv:2302.13971 (2023)."},{"key":"e_1_2_2_64_1","first-page":"24824","article-title":"Chain-of-thought prompting elicits reasoning in large language models","volume":"35","author":"Wei Jason","year":"2022","unstructured":"Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems 35 (2022), 24824--24837.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_2_65_1","volume-title":"A prompt pattern catalog to enhance prompt engineering with chatgpt. arXiv preprint arXiv:2302.11382","author":"White Jules","year":"2023","unstructured":"Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, and Douglas C Schmidt. 2023. A prompt pattern catalog to enhance prompt engineering with chatgpt. arXiv preprint arXiv:2302.11382 (2023)."},{"key":"e_1_2_2_66_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11616-023-00807-6"},{"key":"e_1_2_2_67_1","volume-title":"Safety and Ethical Concerns of Large Language Models. In China National Conference on Chinese Computational Linguistics. https:\/\/api.semanticscholar.org\/CorpusID:261341825","author":"Xi Zhiheng","year":"2023","unstructured":"Zhiheng Xi, Zheng Rui, and Gui Tao. 2023. Safety and Ethical Concerns of Large Language Models. In China National Conference on Chinese Computational Linguistics. https:\/\/api.semanticscholar.org\/CorpusID:261341825"},{"key":"e_1_2_2_68_1","doi-asserted-by":"publisher","DOI":"10.1145\/3581754.3584136"},{"key":"e_1_2_2_69_1","volume-title":"David Buttler, Anna Hiszpanski, et al.","author":"Yi Gyeong Hoon","year":"2024","unstructured":"Gyeong Hoon Yi, Jiwoo Choi, Hyeongyun Song, Olivia Miano, Jaewoong Choi, Kihoon Bang, Byungju Lee, Seok Su Sohn, David Buttler, Anna Hiszpanski, et al. 2024. 
MaTableGPT: GPT-based Table Data Extractor from Materials Science Literature. arXiv preprint arXiv:2406.05431 (2024)."},{"key":"e_1_2_2_70_1","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0197326"},{"key":"e_1_2_2_71_1","volume-title":"Redefining Qualitative Analysis in the AI Era: Utilizing ChatGPT for Efficient Thematic Analysis. arXiv preprint arXiv:2309.10771","author":"Zhang He","year":"2023","unstructured":"He Zhang, Chuhao Wu, Jingyi Xie, Yao Lyu, Jie Cai, and John M Carroll. 2023. Redefining Qualitative Analysis in the AI Era: Utilizing ChatGPT for Efficient Thematic Analysis. arXiv preprint arXiv:2309.10771 (2023)."},{"key":"e_1_2_2_72_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.fmre.2021.11.011"},{"key":"e_1_2_2_73_1","unstructured":"Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223 (2023)."},{"key":"e_1_2_2_74_1","volume-title":"Can gpt-4 perform neural architecture search? arXiv preprint arXiv:2304.10970","author":"Zheng Mingkai","year":"2023","unstructured":"Mingkai Zheng, Xiu Su, Shan You, Fei Wang, Chen Qian, Chang Xu, and Samuel Albanie. 2023. Can gpt-4 perform neural architecture search? 
arXiv preprint arXiv:2304.10970 (2023)."}],"container-title":["Proceedings of the ACM on Human-Computer Interaction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3711022","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3711022","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,21]],"date-time":"2025-08-21T01:06:22Z","timestamp":1755738382000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3711022"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,5,2]]},"references-count":74,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2025,5,2]]}},"alternative-id":["10.1145\/3711022"],"URL":"https:\/\/doi.org\/10.1145\/3711022","relation":{},"ISSN":["2573-0142"],"issn-type":[{"value":"2573-0142","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,5,2]]},"assertion":[{"value":"2025-05-02","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}