{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,16]],"date-time":"2026-03-16T18:49:16Z","timestamp":1773686956558,"version":"3.50.1"},"reference-count":80,"publisher":"Association for Computing Machinery (ACM)","issue":"1","funder":[{"DOI":"10.13039\/501100002920","name":"Research Grants Council, University Grants Committee","doi-asserted-by":"publisher","award":["14207123"],"award-info":[{"award-number":["14207123"]}],"id":[{"id":"10.13039\/501100002920","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100002920","name":"Research Grants Council, University Grants Committee","doi-asserted-by":"publisher","award":["C4072-21G"],"award-info":[{"award-number":["C4072-21G"]}],"id":[{"id":"10.13039\/501100002920","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Interact. Mob. Wearable Ubiquitous Technol."],"published-print":{"date-parts":[[2026,3,16]]},"abstract":"<jats:p>In crowded social settings like conferences, background noise, overlapping voices, and lively interactions often lead to \u201ccocktail party deafness,\u201d hindering clear conversation. While modern earphones are a promising platform for speech enhancement, existing solutions are limited: they either operate on a single device, ignoring the multi-party nature of conversation, or rely on impractical assumptions like fixed conversation areas and pre-recorded audio. We present CoHear, a collaborative system that leverages a network of earphones to holistically model and enhance speech at the conversation level. 
CoHear bridges acoustic sensor networks with deep learning for target speech extraction through two key contributions: 1) a novel, conversation-driven network that dynamically forms groups based on user interaction, using verbal and non-verbal cues (primarily head orientation) for robust, infrastructure-free coordination; and 2) a bandwidth-efficient, robust target speech extraction model that effectively utilizes peer-relayed audio as conditioning signals, even under network constraints. CoHear is evaluated in both real-world experiments and simulations. Results show that our conversation network obtains more than 90% accuracy in group formation, improves the speech quality by up to 8.8 dB over state-of-the-art baselines, and demonstrates real-time performance on a mobile device. In a user study with 20 participants, CoHear achieves a much higher score than the baseline, along with good usability.<\/jats:p>","DOI":"10.1145\/3789685","type":"journal-article","created":{"date-parts":[[2026,3,16]],"date-time":"2026-03-16T17:51:14Z","timestamp":1773683474000},"page":"1-29","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["CoHear: Conversation Enhancement via Multi-earphone Collaboration"],"prefix":"10.1145","volume":"10","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2130-1385","authenticated-orcid":false,"given":"Lixing","family":"He","sequence":"first","affiliation":[{"name":"Department of Information Engineering, The Chinese University of Hong Kong, Hong Kong, Hong Kong"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1852-3825","authenticated-orcid":false,"given":"Yunqi","family":"Guo","sequence":"additional","affiliation":[{"name":"Department of Information Engineering, The Chinese University of Hong Kong, Hong Kong, Hong Kong"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4433-5211","authenticated-orcid":false,"given":"Zhenyu","family":"Yan","sequence":"additional","affiliation":[{"name":"Department of 
Information Engineering, The Chinese University of Hong Kong, Hong Kong, Hong Kong"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1772-7751","authenticated-orcid":false,"given":"Guoliang","family":"Xing","sequence":"additional","affiliation":[{"name":"Department of Information Engineering, The Chinese University of Hong Kong, Hong Kong, Hong Kong"}]}],"member":"320","published-online":{"date-parts":[[2026,3,16]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/3379337.3415588"},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/SCVT.2011.6101302"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3453182"},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.21437\/Interspeech.2023-105"},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP39728.2021.9413473"},{"key":"e_1_2_1_6_1","volume-title":"Turn-taking prediction for natural conversational speech. arXiv preprint arXiv:2208.13321","author":"Li Bo","year":"2022","unstructured":"Shuo-yiin Chang, Bo Li, Tara N Sainath, Chao Zhang, Trevor Strohman, Qiao Liang, and Yanzhang He. 2022. Turn-taking prediction for natural conversational speech. arXiv preprint arXiv:2208.13321 (2022)."},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/3498361.3538933"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/3544216.3544258"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/3603269.3604851"},{"key":"e_1_2_1_10_1","volume-title":"Takuya Yoshioka, and Shyamnath Gollakota.","author":"Chen Tuochao","year":"2024","unstructured":"Tuochao Chen, Malek Itani, Sefik Emre Eskimez, Takuya Yoshioka, and Shyamnath Gollakota. 2024. Hearable devices with sound bubbles. 
Nature Electronics (2024), 1\u201312."},{"key":"e_1_2_1_11_1","volume-title":"Takuya Yoshioka, and Shyamnath Gollakota.","author":"Chen Tuochao","year":"2024","unstructured":"Tuochao Chen, Qirui Wang, Bohan Wu, Malek Itani, Sefik Emre Eskimez, Takuya Yoshioka, and Shyamnath Gollakota. 2024. Target conversation extraction: Source separation using turn-taking dynamics. arXiv preprint arXiv:2407.11277 (2024)."},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1121\/1.1907229"},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/3613904.3642095"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3714394.3754429"},{"key":"e_1_2_1_15_1","volume-title":"Vamsi Krishna Ithapu, and Ravish Mehra","author":"Donley Jacob","year":"2021","unstructured":"Jacob Donley, Vladimir Tourbabin, Jung-Suk Lee, Mark Broyles, Hao Jiang, Jie Shen, Maja Pantic, Vamsi Krishna Ithapu, and Ravish Mehra. 2021. Easycom: An augmented reality dataset to support algorithms for easy communication in noisy environments. arXiv preprint arXiv:2107.04174 (2021)."},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3631447"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3643832.3661860"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11257-010-9074-4"},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/TASLP.2018.2828321"},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/358669.358692"},{"key":"e_1_2_1_21_1","volume-title":"Proceedings of the 24th Forum on Information Technology (FIT2025)","author":"Gao Tian","year":"2025","unstructured":"Tian Gao, Xuefu Dong, Akihito Taya, Yuuki Nishiyama, and Kaoru Sezaki. 2025. Expression Recognition Based on Ear Canal Shape Detection Using Earbud and Ultrasound. In Proceedings of the 24th Forum on Information Technology (FIT2025). 
Sapporo, Japan."},{"key":"e_1_2_1_22_1","volume-title":"Funasr: A fundamental end-to-end speech recognition toolkit. arXiv preprint arXiv:2305.11013","author":"Gao Zhifu","year":"2023","unstructured":"Zhifu Gao, Zerui Li, Jiaming Wang, Haoneng Luo, Xian Shi, Mengzhe Chen, Yabin Li, Lingyun Zuo, Zhihao Du, Zhangyu Xiao, et al. 2023. Funasr: A fundamental end-to-end speech recognition toolkit. arXiv preprint arXiv:2305.11013 (2023)."},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1186\/s13636-021-00210-x"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP39728.2021.9413831"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP43922.2022.9746284"},{"key":"e_1_2_1_26_1","volume-title":"On the Integration of Sampling Rate Synchronization and Acoustic Beamforming. In 2023 31st European Signal Processing Conference (EUSIPCO). IEEE, 11\u201315","author":"Gburrek Tobias","year":"2023","unstructured":"Tobias Gburrek, Joerg Schmalenstroeer, and Reinhold Haeb-Umbach. 2023. On the Integration of Sampling Rate Synchronization and Acoustic Beamforming. In 2023 31st European Signal Processing Conference (EUSIPCO). IEEE, 11\u201315."},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287041"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2017.7952261"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01842"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCS.2006.301514"},{"key":"e_1_2_1_31_1","volume-title":"Audio Engineering Society Conference: 2020 AES International Conference on Audio for Virtual and Augmented Reality. Audio Engineering Society.","author":"Gupta Rishabh","year":"2020","unstructured":"Rishabh Gupta, Rishabh Ranjan, Jianjun He, Woon-Seng Gan, and Santi Peksi. 2020. Acoustic transparency in hearables for augmented reality audio: Hear-through techniques review and challenges. 
In Audio Engineering Society Conference: 2020 AES International Conference on Audio for Virtual and Augmented Reality. Audio Engineering Society."},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/3678594"},{"key":"e_1_2_1_33_1","volume-title":"HeadSense: Visual Search Monitoring and Distracted Behavior Detection for Bicycle Riders. In 2023 IEEE 24th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM)","author":"Han Zengyi","unstructured":"Zengyi Han, Xuefu Dong, Yuuki Nishiyama, and Kaoru Sezaki. 2023. HeadSense: Visual Search Monitoring and Distracted Behavior Detection for Bicycle Riders. In 2023 IEEE 24th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM). IEEE, 281\u2013289."},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/PERCOM56429.2023.10099215"},{"key":"e_1_2_1_35_1","first-page":"618","article-title":"Augmented reality audio for mobile and wearable appliances","volume":"52","author":"H\u00e4rm\u00e4 Aki","year":"2004","unstructured":"Aki H\u00e4rm\u00e4, Julia Jakka, Miikka Tikander, Matti Karjalainen, Tapio Lokki, Jarmo Hiipakka, and Ga\u00ebtan Lorho. 2004. Augmented reality audio for mobile and wearable appliances. Journal of the Audio Engineering Society 52, 6 (2004), 618\u2013639.","journal-title":"Journal of the Audio Engineering Society"},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3581791.3596832"},{"key":"e_1_2_1_37_1","volume-title":"EmbodiedSense: Understanding Embodied Activities with Earphones. arXiv preprint arXiv:2504.02624","author":"He Lixing","year":"2025","unstructured":"Lixing He, Bufang Yang, Di Duan, Zhenyu Yan, and Guoliang Xing. 2025. EmbodiedSense: Understanding Embodied Activities with Earphones. 
arXiv preprint arXiv:2504.02624 (2025)."},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1109\/SSP.2009.5278589"},{"key":"e_1_2_1_39_1","volume-title":"Wireless Hearables With Programmable Speech AI Accelerators. arXiv preprint arXiv:2503.18698","author":"Itani Malek","year":"2025","unstructured":"Malek Itani, Tuochao Chen, Arun Raghavan, Gavriel Kohlberg, and Shyamnath Gollakota. 2025. Wireless Hearables With Programmable Speech AI Accelerators. arXiv preprint arXiv:2503.18698 (2025)."},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1038\/s41467-023-40869-8"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3341302.3342091"},{"key":"e_1_2_1_42_1","first-page":"20925","article-title":"The cone of silence: Speech separation by localization","volume":"33","author":"Jenrungrot Teerapat","year":"2020","unstructured":"Teerapat Jenrungrot, Vivek Jayaram, Steve Seitz, and Ira Kemelmacher-Shlizerman. 2020. The cone of silence: Speech separation by localization. Advances in Neural Information Processing Systems 33 (2020), 20925\u201320938.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/TASSP.1976.1162830"},{"key":"e_1_2_1_44_1","volume-title":"The AMI meeting corpus. In Proc. International Conference on Methods and Techniques in Behavioral Research. 1\u20134.","author":"Kraaij Wessel","year":"2005","unstructured":"Wessel Kraaij, Thomas Hain, Mike Lincoln, and Wilfried Post. 2005. The AMI meeting corpus. In Proc. International Conference on Methods and Techniques in Behavioral Research. 1\u20134."},{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1109\/WASPAA52581.2021.9632775"},{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1145\/3161170"},{"key":"e_1_2_1_47_1","volume-title":"Timing in turn-taking and its implications for processing models of language. 
Frontiers in psychology 6","author":"Levinson Stephen C","year":"2015","unstructured":"Stephen C Levinson and Francisco Torreira. 2015. Timing in turn-taking and its implications for processing models of language. Frontiers in psychology 6 (2015), 731."},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.21437\/Interspeech.2022-10894"},{"key":"e_1_2_1_49_1","unstructured":"Livekit. [n. d.]. End-to-end stack for WebRTC. SFU media server and SDKs. https:\/\/github.com\/livekit\/livekit?tab=readme-ov-file"},{"key":"e_1_2_1_50_1","volume-title":"Earda: Towards accurate and data-efficient earable activity sensing. In 2024 IEEE Coupling of Sensing & Computing in AIoT Systems (CSCAIoT)","author":"Lyu Shengzhe","year":"2024","unstructured":"Shengzhe Lyu, Yongliang Chen, Di Duan, Renqi Jia, and Weitao Xu. 2024. Earda: Towards accurate and data-efficient earable activity sensing. In 2024 IEEE Coupling of Sensing & Computing in AIoT Systems (CSCAIoT). IEEE, 1\u20137."},{"key":"e_1_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP43922.2022.9746132"},{"key":"e_1_2_1_52_1","unstructured":"Patently Apple. 2025. In 2024 the Global Smart Personal Audio Devices Market Achieved 11.2% Growth with Apple the Runaway Leader. https:\/\/www.patentlyapple.com\/2025\/03\/in-2024-the-global-smart-personal-audio-devices-market-achieved-112-growth-with-apple-the-runaway-leader.html Accessed: 2025-05-02."},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1121\/1.1919140"},{"key":"e_1_2_1_54_1","volume-title":"Companion to clinical neurology","author":"Pryse-Phillips William","unstructured":"William Pryse-Phillips. 2009. Companion to clinical neurology. Oxford University Press."},{"key":"e_1_2_1_55_1","volume-title":"18th USENIX Symposium on Networked Systems Design and Implementation (NSDI 21)","author":"Qian Kun","year":"2021","unstructured":"Kun Qian, Yumeng Lu, Zheng Yang, Kai Zhang, Kehong Huang, Xinjun Cai, Chenshu Wu, and Yunhao Liu. 2021. 
{AIRCODE}: Hidden {Screen-Camera} communication on an invisible and inaudible dual channel. In 18th USENIX Symposium on Networked Systems Design and Implementation (NSDI 21). 457\u2013470."},{"key":"e_1_2_1_56_1","volume-title":"Analyzing LLM behavior in dialogue summarization: Unveiling circumstantial hallucination trends. arXiv preprint arXiv:2406.03487","author":"Ramprasad Sanjana","year":"2024","unstructured":"Sanjana Ramprasad, Elisa Ferracane, and Zachary C Lipton. 2024. Analyzing LLM behavior in dialogue summarization: Unveiling circumstantial hallucination trends. arXiv preprint arXiv:2406.03487 (2024)."},{"key":"e_1_2_1_57_1","volume-title":"A simplest systematics for the organization of turn-taking for conversation. language 50, 4","author":"Sacks Harvey","year":"1974","unstructured":"Harvey Sacks, Emanuel A Schegloff, and Gail Jefferson. 1974. A simplest systematics for the organization of turn-taking for conversation. language 50, 4 (1974), 696\u2013735."},{"key":"e_1_2_1_58_1","volume-title":"Pyroomacoustics. In 2018 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, 351\u2013355","author":"Scheibler Robin","year":"2018","unstructured":"Robin Scheibler, Eric Bezzam, and Ivan Dokmani\u0107. 2018. Pyroomacoustics. In 2018 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, 351\u2013355."},{"key":"e_1_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1145\/3230543.3230550"},{"key":"e_1_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP43922.2022.9746384"},{"key":"e_1_2_1_61_1","unstructured":"Kazuki Shimada Archontis Politis Parthasaarathy Sudarsanam Daniel A Krause Kengo Uchida Sharath Adavanne Aapo Hakala Yuichiro Koyama Naoya Takahashi Shusuke Takahashi et al. 2024. STARSS23: An audio-visual dataset of spatial recordings of real scenes with spatiotemporal annotations of sound events. 
Advances in Neural Information Processing Systems 36 (2024)."},{"key":"e_1_2_1_62_1","volume-title":"Tolerable hearing aid delays. I. Estimation of limits imposed by the auditory path alone using simulated hearing losses. Ear and hearing 20, 3","author":"Stone Michael A","year":"1999","unstructured":"Michael A Stone and Brian CJ Moore. 1999. Tolerable hearing aid delays. I. Estimation of limits imposed by the auditory path alone using simulated hearing losses. Ear and hearing 20, 3 (1999), 182\u2013192."},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1145\/3447993.3448626"},{"key":"e_1_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1145\/3230543.3230580"},{"key":"e_1_2_1_65_1","doi-asserted-by":"publisher","DOI":"10.1145\/3586183.3606779"},{"key":"e_1_2_1_66_1","doi-asserted-by":"publisher","DOI":"10.1145\/3613904.3642057"},{"key":"e_1_2_1_67_1","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3550293","article-title":"LoEar: Push the range limit of acoustic sensing for vital sign monitoring","volume":"6","author":"Wang Lei","year":"2022","unstructured":"Lei Wang, Wei Li, Ke Sun, Fusang Zhang, Tao Gu, Chenren Xu, and Daqing Zhang. 2022. LoEar: Push the range limit of acoustic sensing for vital sign monitoring. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, 3 (2022), 1\u201324.","journal-title":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies"},{"key":"e_1_2_1_68_1","volume-title":"DCASE2022 Challenge, Tech. Rep.","author":"Wang Qing","year":"2022","unstructured":"Qing Wang, Li Chai, Huaxin Wu, Zhaoxu Nian, Shutong Niu, Siyuan Zheng, Yuyang Wang, Lei Sun, Yi Fang, Jia Pan, et al. 2022. The nerc-slip system for sound event localization and detection of dcase2022 challenge. DCASE2022 Challenge, Tech. Rep. (2022)."},{"key":"e_1_2_1_69_1","volume-title":"Diarizationlm: Speaker diarization post-processing with large language models. 
arXiv preprint arXiv:2401.03506","author":"Wang Quan","year":"2024","unstructured":"Quan Wang, Yiling Huang, Guanlong Zhao, Evan Clark, Wei Xia, and Hank Liao. 2024. Diarizationlm: Speaker diarization post-processing with large language models. arXiv preprint arXiv:2401.03506 (2024)."},{"key":"e_1_2_1_70_1","doi-asserted-by":"publisher","DOI":"10.1109\/TASLP.2019.2921892"},{"key":"e_1_2_1_71_1","doi-asserted-by":"publisher","DOI":"10.1109\/TMC.2020.3032278"},{"key":"e_1_2_1_72_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP43922.2022.9746884"},{"key":"e_1_2_1_73_1","volume-title":"interpolation, and smoothing of stationary time series: with engineering applications","author":"Wiener Norbert","unstructured":"Norbert Wiener. 1949. Extrapolation, interpolation, and smoothing of stationary time series: with engineering applications. The MIT press."},{"key":"e_1_2_1_74_1","doi-asserted-by":"publisher","DOI":"10.1109\/TMC.2022.3222821"},{"key":"e_1_2_1_75_1","doi-asserted-by":"publisher","DOI":"10.1145\/3625687.3625816"},{"key":"e_1_2_1_76_1","doi-asserted-by":"publisher","DOI":"10.1145\/3372224.3419213"},{"key":"e_1_2_1_77_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP48485.2024.10446749"},{"key":"e_1_2_1_78_1","doi-asserted-by":"publisher","DOI":"10.1145\/3712288"},{"key":"e_1_2_1_79_1","volume-title":"20th USENIX Symposium on Networked Systems Design and Implementation (NSDI 23)","author":"Zhang Yongzhao","year":"2023","unstructured":"Yongzhao Zhang, Yezhou Wang, Lanqing Yang, Mei Wang, Yi-Chao Chen, Lili Qiu, Yihong Liu, Guangtao Xue, and Jiadi Yu. 2023. Acoustic sensing and communication using metasurface. In 20th USENIX Symposium on Networked Systems Design and Implementation (NSDI 23). 
1359\u20131374."},{"key":"e_1_2_1_80_1","doi-asserted-by":"publisher","DOI":"10.1109\/MSP.2023.3240008"}],"container-title":["Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3789685","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,16]],"date-time":"2026-03-16T17:51:48Z","timestamp":1773683508000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3789685"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,16]]},"references-count":80,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2026,3,16]]}},"alternative-id":["10.1145\/3789685"],"URL":"https:\/\/doi.org\/10.1145\/3789685","relation":{},"ISSN":["2474-9567"],"issn-type":[{"value":"2474-9567","type":"electronic"}],"subject":[],"published":{"date-parts":[[2026,3,16]]},"assertion":[{"value":"2026-03-16","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}