{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,4]],"date-time":"2026-03-04T02:33:35Z","timestamp":1772591615717,"version":"3.50.1"},"reference-count":83,"publisher":"Association for Computing Machinery (ACM)","issue":"CSCW1","license":[{"start":{"date-parts":[[2024,4,17]],"date-time":"2024-04-17T00:00:00Z","timestamp":1713312000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100006374","name":"Hong Kong Government","doi-asserted-by":"publisher","id":[{"id":"10.13039\/501100006374","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Hum.-Comput. Interact."],"published-print":{"date-parts":[[2024,4,17]]},"abstract":"<jats:p>Prewriting is the process of discovering and developing ideas before writing a first draft, which requires divergent thinking and often implies unstructured strategies such as diagramming, outlining, free-writing, etc. Although large language models (LLMs) have been demonstrated to be useful for a variety of tasks including creative writing, little is known about how users would collaborate with LLMs to support prewriting. The preferred collaborative role and initiative of LLMs during such a creative process is also unclear. To investigate human-LLM collaboration patterns and dynamics during prewriting, we conducted a three-session qualitative study with 15 participants in two creative tasks: story writing and slogan writing. The findings indicated that during collaborative prewriting, there appears to be a three-stage iterative Human-AI Co-creativity process that includes Ideation, Illumination, and Implementation stages. 
This collaborative process champions the human in a dominant role, in addition to mixed and shifting levels of initiative that exist between humans and LLMs. This research also reports on collaboration breakdowns that occur during this process, user perceptions of using existing LLMs during Human-AI Co-creativity, and discusses design implications to support this co-creativity process.<\/jats:p>","DOI":"10.1145\/3637361","type":"journal-article","created":{"date-parts":[[2024,4,29]],"date-time":"2024-04-29T10:05:31Z","timestamp":1714385131000},"page":"1-26","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":46,"title":["\"It Felt Like Having a Second Mind\": Investigating Human-AI Co-creativity in Prewriting with Large Language Models"],"prefix":"10.1145","volume":"8","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-4250-8780","authenticated-orcid":false,"given":"Qian","family":"Wan","sequence":"first","affiliation":[{"name":"City University of Hong Kong, Hong Kong SAR, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3824-2801","authenticated-orcid":false,"given":"Siying","family":"Hu","sequence":"additional","affiliation":[{"name":"City University of Hong Kong, Hong Kong SAR, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8574-111X","authenticated-orcid":false,"given":"Yu","family":"Zhang","sequence":"additional","affiliation":[{"name":"City University of Hong Kong, Hong Kong SAR, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0008-8393-7101","authenticated-orcid":false,"given":"Piaohong","family":"Wang","sequence":"additional","affiliation":[{"name":"City University of Hong Kong, Hong Kong SAR, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2287-473X","authenticated-orcid":false,"given":"Bo","family":"Wen","sequence":"additional","affiliation":[{"name":"University of Macau, Macau SAR, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7761-6351","authenticated-orcid":false,"given":"Zhicong","family":"Lu","sequence":"additional","affiliation":[{"name":"City University of Hong Kong, Hong Kong SAR, China"}]}],"member":"320","published-online":{"date-parts":[[2024,4,26]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300233"},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1147\/JRD.2019.2942288"},{"key":"e_1_2_1_3_1","volume-title":"Ask Me Anything: A simple strategy for prompting language models. arXiv preprint arXiv:2210.02441","author":"Arora Simran","year":"2022","unstructured":"Simran Arora, Avanika Narayan, Mayee F Chen, Laurel J Orr, Neel Guha, Kush Bhatia, Ines Chami, Frederic Sala, and Christopher R\u00e9. 2022. Ask Me Anything: A simple strategy for prompting language models. arXiv preprint arXiv:2210.02441 (2022)."},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/3415167"},{"key":"e_1_2_1_5_1","first-page":"45","article-title":"A Procedural Approach to Process Theory of Writing","volume":"24","author":"Baroudy Ismail","year":"2008","unstructured":"Ismail Baroudy. 2008. A Procedural Approach to Process Theory of Writing: Prewriting Techniques. The International Journal of Language Society and Culture, Vol. 24, 4 (2008), 45--52.","journal-title":"Prewriting Techniques. The International Journal of Language Society and Culture"},{"key":"e_1_2_1_6_1","volume-title":"Thinking aloud: Dynamic context generation improves zero-shot reasoning performance of gpt-2. arXiv preprint arXiv:2103.13033","author":"Betz Gregor","year":"2021","unstructured":"Gregor Betz, Kyle Richardson, and Christian Voigt. 2021. Thinking aloud: Dynamic context generation improves zero-shot reasoning performance of gpt-2. 
arXiv preprint arXiv:2103.13033 (2021)."},{"key":"e_1_2_1_7_1","unstructured":"Rishi Bommasani Drew A Hudson Ehsan Adeli Russ Altman Simran Arora Sydney von Arx Michael S Bernstein Jeannette Bohg Antoine Bosselut Emma Brunskill et al. 2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258 (2021)."},{"key":"e_1_2_1_8_1","unstructured":"Tom Brown Benjamin Mann Nick Ryder Melanie Subbiah Jared D Kaplan Prafulla Dhariwal Arvind Neelakantan Pranav Shyam Girish Sastry Amanda Askell et al. 2020. Language models are few-shot learners. Advances in neural information processing systems Vol. 33 (2020) 1877--1901."},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300234"},{"key":"e_1_2_1_10_1","volume-title":"Blind variation and selective retentions in creative thought as in other knowledge processes. Psychological review","author":"Campbell Donald T","year":"1960","unstructured":"Donald T Campbell. 1960. Blind variation and selective retentions in creative thought as in other knowledge processes. Psychological review, Vol. 67, 6 (1960), 380."},{"key":"e_1_2_1_11_1","volume-title":"Proceedings of the ACM on Human-Computer Interaction","volume":"6","author":"Caramiaux Baptiste","year":"2022","unstructured":"Baptiste Caramiaux and Sarah Fdili Alaoui. 2022. \" Explorers of Unknown Planets\" Practices and Politics of Artificial Intelligence in Visual Arts. Proceedings of the ACM on Human-Computer Interaction, Vol. 6, CSCW2 (2022), 1--24."},{"key":"e_1_2_1_12_1","unstructured":"Kathy Charmaz. 2006. Constructing grounded theory: A practical guide through qualitative analysis. sage."},{"key":"e_1_2_1_13_1","volume-title":"TaleBrush: Sketching Stories with Generative Pretrained Language Models. In CHI Conference on Human Factors in Computing Systems. 1--19","author":"Young Chung John Joon","year":"2022","unstructured":"John Joon Young Chung, Wooseok Kim, Kang Min Yoo, Hwaran Lee, Eytan Adar, and Minsuk Chang. 2022. 
TaleBrush: Sketching Stories with Generative Pretrained Language Models. In CHI Conference on Human Factors in Computing Systems. 1--19."},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3172944.3172983"},{"key":"e_1_2_1_15_1","volume-title":"Basics of qualitative research: Techniques and procedures for developing grounded theory","author":"Corbin Juliet","unstructured":"Juliet Corbin and Anselm Strauss. 2014. Basics of qualitative research: Techniques and procedures for developing grounded theory. Sage publications."},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1017\/S1351324920000601"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3526113.3545672"},{"key":"e_1_2_1_18_1","volume-title":"How to Prompt? Opportunities and Challenges of Zero-and Few-Shot Learning for Human-AI Interaction in Creative Applications of Generative Models. arXiv preprint arXiv:2209.01390","author":"Dang Hai","year":"2022","unstructured":"Hai Dang, Lukas Mecke, Florian Lehmann, Sven Goller, and Daniel Buschek. 2022b. How to Prompt? Opportunities and Challenges of Zero-and Few-Shot Learning for Human-AI Interaction in Creative Applications of Generative Models. arXiv preprint arXiv:2209.01390 (2022)."},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/2856767.2856795"},{"key":"e_1_2_1_20_1","volume-title":"Kunwar Yashraj Singh, and Brian Magerko","author":"Davis Nicholas Mark","year":"2016","unstructured":"Nicholas Mark Davis, Chih-Pin Hsiao, Kunwar Yashraj Singh, and Brian Magerko. 2016a. Co-creative drawing agent with object recognition. In Twelfth artificial intelligence and interactive digital entertainment conference."},{"key":"e_1_2_1_21_1","first-page":"31","article-title":"Examining copyright protection of AI-generated art","volume":"1","author":"Dee Celine Melanie A","year":"2018","unstructured":"Celine Melanie A Dee. 2018. Examining copyright protection of AI-generated art. Delphi , Vol. 
1 (2018), 31.","journal-title":"Delphi"},{"key":"e_1_2_1_22_1","volume-title":"Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)."},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/1879831.1879836"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/1640233.1640260"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3027063.3051137"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.2307\/356600"},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300619"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1177\/0276237421994697"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300526"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/3532106.3533533"},{"key":"e_1_2_1_31_1","volume-title":"GPT-3 Creative Fiction. https:\/\/www.gwern.net\/GPT-3 Retrieved","author":"Gwern Branwen","year":"2023","unstructured":"Branwen Gwern. 2022. GPT-3 Creative Fiction. https:\/\/www.gwern.net\/GPT-3 Retrieved Jan 10, 2023 from"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/3334480.3383051"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/302979.303030"},{"key":"e_1_2_1_34_1","volume-title":"Effective human--AI work design for collaborative decision-making. Kybernetes ahead-of-print","author":"Jain Ruchika","year":"2022","unstructured":"Ruchika Jain, Naval Garg, and Shikha N Khera. 2022. Effective human--AI work design for collaborative decision-making. Kybernetes ahead-of-print (2022)."},{"key":"e_1_2_1_35_1","volume-title":"Design fixation. 
Design studies","author":"Jansson David G","year":"1991","unstructured":"David G Jansson and Steven M Smith. 1991. Design fixation. Design studies, Vol. 12, 1 (1991), 3--11."},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445093"},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1145\/3442188.3445890"},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/3392850"},{"key":"e_1_2_1_39_1","volume-title":"Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa.","author":"Kojima Takeshi","year":"2022","unstructured":"Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large Language Models are Zero-Shot Reasoners. arXiv preprint arXiv:2205.11916 (2022)."},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3491102.3501999"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445472"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/2010324.1964922"},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376590"},{"key":"e_1_2_1_44_1","volume-title":"Question-driven design process for explainable ai user experiences. arXiv preprint arXiv:2104.03483","author":"Liao Q Vera","year":"2021","unstructured":"Q Vera Liao, Milena Pribi\u0107, Jaesik Han, Sarah Miller, and Daby Sow. 2021. Question-driven design process for explainable ai user experiences. arXiv preprint arXiv:2104.03483 (2021)."},{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.3389\/frobt.2021.719944"},{"key":"e_1_2_1_46_1","volume-title":"What Makes Good In-Context Examples for GPT-3? arXiv preprint arXiv:2101.06804","author":"Liu Jiachang","year":"2021","unstructured":"Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2021. What Makes Good In-Context Examples for GPT-3? 
arXiv preprint arXiv:2101.06804 (2021)."},{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376739"},{"key":"e_1_2_1_48_1","volume-title":"Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. arXiv preprint arXiv:2104.08786","author":"Lu Yao","year":"2021","unstructured":"Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2021. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. arXiv preprint arXiv:2104.08786 (2021)."},{"key":"e_1_2_1_49_1","volume-title":"Inkplanner: Supporting prewriting via intelligent visual diagramming","author":"Lu Zhicong","year":"2018","unstructured":"Zhicong Lu, Mingming Fan, Yun Wang, Jian Zhao, Michelle Annett, and Daniel Wigdor. 2018. Inkplanner: Supporting prewriting via intelligent visual diagramming. IEEE transactions on visualization and computer graphics, Vol. 25, 1 (2018), 277--287."},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/3359297"},{"key":"e_1_2_1_51_1","volume-title":"Cross-task generalization via natural language crowdsourcing instructions. arXiv preprint arXiv:2104.08773","author":"Mishra Swaroop","year":"2021","unstructured":"Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2021. Cross-task generalization via natural language crowdsourcing instructions. arXiv preprint arXiv:2104.08773 (2021)."},{"key":"e_1_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/3357236.3395454"},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/3173574.3174223"},{"key":"e_1_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1007\/s41469-021-00095-2"},{"key":"e_1_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411763.3451760"},{"key":"e_1_2_1_56_1","volume-title":"Designing Creative AI Partners with COFI: A Framework for Modeling Interaction in Human-AI Co-Creative Systems. 
ACM Transactions on Computer-Human Interaction","author":"Rezwana Jeba","year":"2022","unstructured":"Jeba Rezwana and Mary Lou Maher. 2022. Designing Creative AI Partners with COFI: A Framework for Modeling Interaction in Human-AI Co-Creative Systems. ACM Transactions on Computer-Human Interaction (2022)."},{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1145\/3180308.3180329"},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.2307\/354885"},{"key":"e_1_2_1_59_1","unstructured":"MA Runco. 2014. Creativity: theories and themes: Research development and practice."},{"key":"e_1_2_1_60_1","volume-title":"The standard definition of creativity. Creativity research journal","author":"Runco Mark A","year":"2012","unstructured":"Mark A Runco and Garrett J Jaeger. 2012. The standard definition of creativity. Creativity research journal , Vol. 24, 1 (2012), 92--96."},{"key":"e_1_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.1145\/2702123.2702383"},{"key":"e_1_2_1_62_1","volume-title":"Explaining creativity: The science of human innovation","author":"Sawyer R Keith","unstructured":"R Keith Sawyer. 2011. Explaining creativity: The science of human innovation. Oxford university press."},{"key":"e_1_2_1_63_1","volume-title":"Eric Wallace, and Sameer Singh.","author":"Shin Taylor","year":"2020","unstructured":"Taylor Shin, Yasaman Razeghi, Robert L Logan IV, Eric Wallace, and Sameer Singh. 2020. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. arXiv preprint arXiv:2010.15980 (2020)."},{"key":"e_1_2_1_64_1","volume-title":"Frontiers of human-centered computing, online communities and virtual environments","author":"Shneiderman Ben","unstructured":"Ben Shneiderman. 2001. Supporting creativity with advanced information-abundant user interfaces. In Frontiers of human-centered computing, online communities and virtual environments. Springer, 469--480."},{"key":"e_1_2_1_65_1","volume-title":"Direct manipulation vs. 
interface agents. interactions","author":"Shneiderman Ben","year":"1997","unstructured":"Ben Shneiderman and Pattie Maes. 1997. Direct manipulation vs. interface agents. interactions, Vol. 4, 6 (1997), 42--61."},{"key":"e_1_2_1_66_1","volume-title":"The blind-variation and selective-retention theory of creativity: Recent developments and current status of BVSR. Creativity Research Journal","author":"Simonton Dean Keith","year":"2022","unstructured":"Dean Keith Simonton. 2022. The blind-variation and selective-retention theory of creativity: Recent developments and current status of BVSR. Creativity Research Journal (2022), 1--20."},{"key":"e_1_2_1_67_1","volume-title":"Where to hide a stolen elephant: Leaps in creative writing with multimodal machine intelligence. ACM Transactions on Computer-Human Interaction","author":"Singh Nikhil","year":"2022","unstructured":"Nikhil Singh, Guillermo Bernal, Daria Savchenko, and Elena L Glassman. 2022. Where to hide a stolen elephant: Leaps in creative writing with multimodal machine intelligence. ACM Transactions on Computer-Human Interaction (2022)."},{"key":"e_1_2_1_68_1","volume-title":"The concept of creativity: Prospects and paradigms. Handbook of creativity","author":"Sternberg Robert J","year":"1999","unstructured":"Robert J Sternberg and Todd I Lubart. 1999. The concept of creativity: Prospects and paradigms. Handbook of creativity, Vol. 1, 3--15 (1999)."},{"key":"e_1_2_1_69_1","doi-asserted-by":"publisher","DOI":"10.1145\/3490099.3511119"},{"key":"e_1_2_1_70_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.eacl-demos.29"},{"key":"e_1_2_1_71_1","doi-asserted-by":"publisher","DOI":"10.1609\/aimag.v28i2.2035"},{"key":"e_1_2_1_72_1","volume-title":"The art of thought","author":"Wallas Graham","unstructured":"Graham Wallas. 1926. The art of thought. Vol. 10. 
Harcourt, Brace."},{"key":"e_1_2_1_73_1","doi-asserted-by":"crossref","unstructured":"Dakuo Wang Elizabeth Churchill Pattie Maes Xiangmin Fan Ben Shneiderman Yuanchun Shi and Qianying Wang. 2020. From human-human collaboration to Human-AI collaboration: Designing AI systems that can work together with people. In Extended abstracts of the 2020 CHI conference on human factors in computing systems. 1--6.","DOI":"10.1145\/3334480.3381069"},{"key":"e_1_2_1_74_1","doi-asserted-by":"publisher","DOI":"10.1145\/3359313"},{"key":"e_1_2_1_75_1","unstructured":"Jason Wei Yi Tay Rishi Bommasani Colin Raffel Barret Zoph Sebastian Borgeaud Dani Yogatama Maarten Bosma Denny Zhou Donald Metzler et al. 2022. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682 (2022)."},{"key":"e_1_2_1_76_1","doi-asserted-by":"publisher","DOI":"10.1145\/3397481.3450656"},{"key":"e_1_2_1_77_1","doi-asserted-by":"publisher","DOI":"10.1145\/3491101.3519729"},{"key":"e_1_2_1_78_1","doi-asserted-by":"publisher","DOI":"10.1145\/3491102.3517582"},{"key":"e_1_2_1_79_1","doi-asserted-by":"publisher","DOI":"10.1145\/3491102.3502075"},{"key":"e_1_2_1_80_1","doi-asserted-by":"publisher","DOI":"10.1145\/3490099.3511105"},{"key":"e_1_2_1_81_1","doi-asserted-by":"publisher","DOI":"10.1145\/3491102.3501914"},{"key":"e_1_2_1_82_1","volume-title":"Proceedings of the ACM on Human-Computer Interaction","volume":"4","author":"Zhang Rui","year":"2021","unstructured":"Rui Zhang, Nathan J McNeese, Guo Freeman, and Geoff Musick. 2021. \" An Ideal Human\" Expectations of AI Teammates in Human-AI Teaming. Proceedings of the ACM on Human-Computer Interaction, Vol. 4, CSCW3 (2021), 1--25."},{"key":"e_1_2_1_83_1","volume-title":"Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba.","author":"Zhou Yongchao","year":"2022","unstructured":"Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. 2022. 
Large language models are human-level prompt engineers. arXiv preprint arXiv:2211.01910 (2022)."}],"container-title":["Proceedings of the ACM on Human-Computer Interaction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3637361","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3637361","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,22]],"date-time":"2025-08-22T17:29:29Z","timestamp":1755883769000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3637361"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,4,17]]},"references-count":83,"journal-issue":{"issue":"CSCW1","published-print":{"date-parts":[[2024,4,17]]}},"alternative-id":["10.1145\/3637361"],"URL":"https:\/\/doi.org\/10.1145\/3637361","relation":{},"ISSN":["2573-0142"],"issn-type":[{"value":"2573-0142","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,4,17]]},"assertion":[{"value":"2024-04-26","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}