{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,31]],"date-time":"2026-03-31T21:33:59Z","timestamp":1774992839016,"version":"3.50.1"},"reference-count":66,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2025,9,3]],"date-time":"2025-09-03T00:00:00Z","timestamp":1756857600000},"content-version":"vor","delay-in-days":0,"URL":"http:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100000001","name":"National Science Foundation","doi-asserted-by":"publisher","award":["2003279, 2112167, 1911095, 2112665, 2120019, 2211386"],"award-info":[{"award-number":["2003279, 2112167, 1911095, 2112665, 2120019, 2211386"]}],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000028","name":"Semiconductor Research Corporation","doi-asserted-by":"publisher","award":["PRISM and CoCoSys, centers in JUMP 2.0"],"award-info":[{"award-number":["PRISM and CoCoSys, centers in JUMP 2.0"]}],"id":[{"id":"10.13039\/100000028","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Interact. Mob. Wearable Ubiquitous Technol."],"published-print":{"date-parts":[[2025,9,3]]},"abstract":"<jats:p>Natural language interaction with sensing systems is crucial for addressing users' personal concerns and providing health-related insights into their daily lives. When a user asks a question, the system automatically analyzes the full history of sensor data, extracts relevant information, and generates an appropriate response. However, existing systems are limited to short-duration (e.g., one minute) or low-frequency (e.g., daily step count) sensor data. 
In addition, they struggle with quantitative questions that require precise numerical answers.<\/jats:p>\n          <jats:p>In this work, we introduce SensorChat, the first end-to-end QA system designed for daily life monitoring using long-duration, high-frequency time series data. Given raw sensor signals spanning multiple days and a user-defined natural language question, SensorChat generates semantically meaningful responses that directly address users' concerns. SensorChat effectively handles both quantitative questions that require numerical precision and qualitative questions that require high-level reasoning to infer subjective insights. To achieve this, SensorChat uses an innovative three-stage pipeline that includes question decomposition, sensor data query, and answer assembly. The first and third stages leverage Large Language Models (LLMs) to interpret human queries and generate responses. The intermediate querying stage extracts relevant information from the complete sensor data history, which is then combined with the original query in the final stage to produce accurate and meaningful answers. Real-world implementations demonstrate that SensorChat supports real-time interaction on a cloud server and, after quantization, can run entirely on an edge platform. Comprehensive QA evaluations show that SensorChat achieves 93% higher answer accuracy than the best-performing state-of-the-art systems on quantitative questions. Furthermore, a user study with eight volunteers highlights SensorChat's effectiveness in answering qualitative and open-ended questions. 
The code is available at https:\/\/github.com\/Orienfish\/SensorChat.<\/jats:p>","DOI":"10.1145\/3749496","type":"journal-article","created":{"date-parts":[[2025,9,3]],"date-time":"2025-09-03T17:15:45Z","timestamp":1756919745000},"page":"1-35","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["SensorChat: Answering Qualitative and Quantitative Questions during Long-term Multimodal Sensor Interactions"],"prefix":"10.1145","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-9638-6184","authenticated-orcid":false,"given":"Xiaofan","family":"Yu","sequence":"first","affiliation":[{"name":"University of California San Diego, La Jolla, California, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0641-3677","authenticated-orcid":false,"given":"Lanxiang","family":"Hu","sequence":"additional","affiliation":[{"name":"University of California San Diego, La Jolla, California, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0004-3854-7930","authenticated-orcid":false,"given":"Benjamin","family":"Reichman","sequence":"additional","affiliation":[{"name":"Georgia Institute of Technology, Atlanta, Georgia, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0001-5511-3286","authenticated-orcid":false,"given":"Dylan","family":"Chu","sequence":"additional","affiliation":[{"name":"University of California San Diego, La Jolla, California, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-5447-8693","authenticated-orcid":false,"given":"Rushil","family":"Chandrupatla","sequence":"additional","affiliation":[{"name":"University of California San Diego, La Jolla, California, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8908-1307","authenticated-orcid":false,"given":"Xiyuan","family":"Zhang","sequence":"additional","affiliation":[{"name":"University of California San Diego, La Jolla, California, 
USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3358-6362","authenticated-orcid":false,"given":"Larry","family":"Heck","sequence":"additional","affiliation":[{"name":"Georgia Institute of Technology, Atlanta, Georgia, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6954-997X","authenticated-orcid":false,"given":"Tajana S.","family":"Rosing","sequence":"additional","affiliation":[{"name":"University of California San Diego, La Jolla, California, USA"}]}],"member":"320","published-online":{"date-parts":[[2025,9,3]]},"reference":[{"key":"e_1_2_1_1_1","unstructured":"2025. DeepSeek. https:\/\/www.deepseek.com\/. [Online]."},{"key":"e_1_2_1_2_1","unstructured":"2025. Jetson Orin NX Module. https:\/\/developer.nvidia.com\/embedded\/jetson-modules#jetson_orin_nx. [Online]."},{"key":"e_1_2_1_3_1","unstructured":"2025. NVIDIA A100 Tensor Core GPU. https:\/\/www.nvidia.com\/en-us\/data-center\/a100\/. [Online]."},{"key":"e_1_2_1_4_1","unstructured":"2025. OpenAI o3-mini. https:\/\/openai.com\/index\/openai-o3-mini\/. [Online]."},{"key":"e_1_2_1_5_1","volume-title":"Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al.","author":"Achiam Josh","year":"2023","unstructured":"Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774 (2023)."},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3421937.3421983"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/1026653.1026656"},{"key":"e_1_2_1_8_1","unstructured":"Jean-Baptiste Alayrac Jeff Donahue Pauline Luc Antoine Miech Iain Barr Yana Hasson Karel Lenc Arthur Mensch Katherine Millican Malcolm Reynolds et al. 2022. Flamingo: a visual language model for few-shot learning. 
Advances in neural information processing systems 35 (2022) 23716--23736."},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/3699759"},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/3666025.3699331"},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/3699747"},{"key":"e_1_2_1_12_1","volume-title":"Xing","author":"Chiang Wei-Lin","year":"2023","unstructured":"Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality. https:\/\/lmsys.org\/blog\/2023-03-30-vicuna\/"},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.jclepro.2021.126908"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v39i15.33762"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.acl-long.65"},{"key":"e_1_2_1_16_1","volume-title":"Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)."},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3659604"},{"key":"e_1_2_1_18_1","volume-title":"Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies","volume":"1","author":"Eyal Matan","unstructured":"Matan Eyal, Tal Baumel, and Michael Elhadad. [n.d.]. Question Answering as an Automatic Evaluation Metric for News Article Summarization. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 3938--3948."},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCAD57390.2023.10323953"},{"key":"e_1_2_1_20_1","volume-title":"Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752","author":"Gu Albert","year":"2023","unstructured":"Albert Gu and Tri Dao. 2023. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752 (2023)."},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.02510"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1027\/\/1015-5759.16.3.150"},{"key":"e_1_2_1_23_1","volume-title":"Evaluating large language models as virtual annotators for time-series physical sensing data. arXiv preprint arXiv:2403.01133","author":"Hota Aritra","year":"2024","unstructured":"Aritra Hota, Soumyajit Chatterjee, and Sandip Chakraborty. 2024. Evaluating large language models as virtual annotators for time-series physical sensing data. arXiv preprint arXiv:2403.01133 (2024)."},{"key":"e_1_2_1_24_1","volume-title":"Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685","author":"Hu Edward J","year":"2021","unstructured":"Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685 (2021)."},{"key":"e_1_2_1_25_1","unstructured":"Lei Huang Weijiang Yu Weitao Ma Weihong Zhong Zhangyin Feng Haotian Wang Qianglong Chen Weihua Peng Xiaocheng Feng Bing Qin et al. 2023. A survey on hallucination in large language models: Principles taxonomy challenges and open questions. 
arXiv preprint arXiv:2311.05232 (2023)."},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1155\/2014\/740452"},{"key":"e_1_2_1_27_1","volume-title":"MindGuard: Towards Accessible and Stigma-free Mental Health First Aid via Edge LLM. arXiv preprint arXiv:2409.10064","author":"Ji Sijie","year":"2024","unstructured":"Sijie Ji, Xinzhe Zheng, Jiawei Sun, Renqi Chen, Wei Gao, and Mani Srivastava. 2024. MindGuard: Towards Accessible and Stigma-free Mental Health First Aid via Edge LLM. arXiv preprint arXiv:2409.10064 (2024)."},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/FMSys62467.2024.00011"},{"key":"e_1_2_1_29_1","volume-title":"Supervised contrastive learning. Advances in neural information processing systems 33","author":"Khosla Prannay","year":"2020","unstructured":"Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. Advances in neural information processing systems 33 (2020), 18661--18673."},{"key":"e_1_2_1_30_1","volume-title":"Health-llm: Large language models for health prediction via wearable sensor data. arXiv preprint arXiv:2401.06866","author":"Kim Yubin","year":"2024","unstructured":"Yubin Kim, Xuhai Xu, Daniel McDuff, Cynthia Breazeal, and Hae Won Park. 2024. Health-llm: Large language models for health prediction via wearable sensor data. arXiv preprint arXiv:2401.06866 (2024)."},{"key":"e_1_2_1_31_1","volume-title":"Xiang Yue, and Wenhu Chen.","author":"Li Tianle","year":"2024","unstructured":"Tianle Li, Ge Zhang, Quy Duc Do, Xiang Yue, and Wenhu Chen. 2024. Long-context llms struggle with long in-context learning. arXiv preprint arXiv:2404.02060 (2024)."},{"key":"e_1_2_1_32_1","volume-title":"AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration. 
In MLSys.","author":"Lin Ji","year":"2024","unstructured":"Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, and Song Han. 2024. AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration. In MLSys."},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.02484"},{"key":"e_1_2_1_34_1","volume-title":"Paolo Di Achille, and Shwetak Patel","author":"Liu Xin","year":"2023","unstructured":"Xin Liu, Daniel McDuff, Geza Kovacs, Isaac Galatzer-Levy, Jacob Sunshine, Jiening Zhan, Ming-Zher Poh, Shun Liao, Paolo Di Achille, and Shwetak Patel. 2023. Large language models are few-shot health learners. arXiv preprint arXiv:2305.15525 (2023)."},{"key":"e_1_2_1_35_1","volume-title":"Mobilellm: Optimizing sub-billion parameter language models for on-device use cases. arXiv preprint arXiv:2402.14905","author":"Liu Zechun","year":"2024","unstructured":"Zechun Liu, Changsheng Zhao, Forrest Iandola, Chen Lai, Yuandong Tian, Igor Fedorov, Yunyang Xiong, Ernie Chang, Yangyang Shi, Raghuraman Krishnamoorthi, et al. 2024. Mobilellm: Optimizing sub-billion parameter language models for on-device use cases. arXiv preprint arXiv:2402.14905 (2024)."},{"key":"e_1_2_1_36_1","first-page":"2507","article-title":"Learn to explain: Multimodal reasoning via thought chains for science question answering","volume":"35","author":"Lu Pan","year":"2022","unstructured":"Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. 2022. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems 35 (2022), 2507--2521.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_1_37_1","volume-title":"Iot-lm: Large multisensory language models for the internet of things. 
arXiv preprint arXiv:2407.09801","author":"Mo Shentong","year":"2024","unstructured":"Shentong Mo, Russ Salakhutdinov, Louis-Philippe Morency, and Paul Pu Liang. 2024. Iot-lm: Large multisensory language models for the internet of things. arXiv preprint arXiv:2407.09801 (2024)."},{"key":"e_1_2_1_38_1","volume-title":"Anymal: An efficient and scalable any-modality augmented language model. arXiv preprint arXiv:2309.16058","author":"Moon Seungwhan","year":"2023","unstructured":"Seungwhan Moon, Andrea Madotto, Zhaojiang Lin, Tushar Nagarajan, Matt Smith, Shashank Jain, Chun-Fu Yeh, Prakash Murugesan, Peyman Heidari, Yue Liu, et al. 2023. Anymal: An efficient and scalable any-modality augmented language model. arXiv preprint arXiv:2309.16058 (2023)."},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.findings-emnlp.883"},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3560905.3568074"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3495243.3560519"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1109\/SenSys-ML62579.2024.00007"},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1145\/3581791.3596844"},{"key":"e_1_2_1_44_1","volume-title":"Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32","author":"Adam Paszke","year":"2019","unstructured":"Adam Paszke et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019)."},{"key":"e_1_2_1_45_1","volume-title":"International conference on machine learning. PMLR, 8748--8763","author":"Radford Alec","year":"2021","unstructured":"Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. 
In International conference on machine learning. PMLR, 8748--8763."},{"key":"e_1_2_1_46_1","first-page":"1","article-title":"Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer","volume":"21","author":"Raffel Colin","year":"2020","unstructured":"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research 21, 140 (2020), 1--67. http:\/\/jmlr.org\/papers\/v21\/20-074.html","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1145\/3715014.3722074"},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.01438"},{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/2539150.2539229"},{"key":"e_1_2_1_50_1","volume-title":"Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971","author":"Touvron Hugo","year":"2023","unstructured":"Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023)."},{"key":"e_1_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1109\/MPRV.2017.3971131"},{"key":"e_1_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/3173574.3174128"},{"key":"e_1_2_1_53_1","volume-title":"Attention is all you need. Advances in Neural Information Processing Systems","author":"Vaswani A","year":"2017","unstructured":"A Vaswani. 2017. Attention is all you need. 
Advances in Neural Information Processing Systems (2017)."},{"key":"e_1_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1145\/3666025.3699349"},{"key":"e_1_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.1145\/3450268.3453529"},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1145\/3638550.3641130"},{"key":"e_1_2_1_57_1","volume-title":"AutoLife: Automatic Life Journaling with Smartphones and LLMs. arXiv preprint arXiv:2412.15714","author":"Xu Huatao","year":"2024","unstructured":"Huatao Xu, Panrong Tong, Mo Li, and Mani Srivastava. 2024. AutoLife: Automatic Life Journaling with Smartphones and LLMs. arXiv preprint arXiv:2412.15714 (2024)."},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1145\/3570361.3613299"},{"key":"e_1_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1145\/3625687.3625782"},{"key":"e_1_2_1_60_1","volume-title":"DrHouse: An LLM-empowered Diagnostic Reasoning System through Harnessing Outcomes from Sensor Data and Expert Knowledge. arXiv preprint arXiv:2405.12541","author":"Yang Bufang","year":"2024","unstructured":"Bufang Yang, Siyang Jiang, Lilin Xu, Kaiwei Liu, Hai Li, Guoliang Xing, Hongkai Chen, Xiaofan Jiang, and Zhenyu Yan. 2024. DrHouse: An LLM-empowered Diagnostic Reasoning System through Harnessing Outcomes from Sensor Data and Expert Knowledge. arXiv preprint arXiv:2405.12541 (2024)."},{"key":"e_1_2_1_61_1","unstructured":"Junjie Ye Xuanting Chen Nuo Xu Can Zu Zekai Shao Shichun Liu Yuhan Cui Zeyang Zhou Chao Gong Yang Shen et al. 2023. A comprehensive capability analysis of gpt-3 and gpt-3.5 series models. arXiv preprint arXiv:2303.10420 (2023)."},{"key":"e_1_2_1_62_1","volume-title":"Taesik Gong, Kimin Lee, and Sung-Ju Lee.","author":"Yoon Hyungjun","year":"2024","unstructured":"Hyungjun Yoon, Biniyam Aschalew Tolera, Taesik Gong, Kimin Lee, and Sung-Ju Lee. 2024. By my eyes: Grounding multimodal large language models with sensor data via visual prompting. 
arXiv preprint arXiv:2407.10385 (2024)."},{"key":"e_1_2_1_63_1","volume-title":"Llama-adapter: Efficient fine-tuning of language models with zero-init attention. arXiv preprint arXiv:2303.16199","author":"Zhang Renrui","year":"2023","unstructured":"Renrui Zhang, Jiaming Han, Chris Liu, Peng Gao, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, and Yu Qiao. 2023. Llama-adapter: Efficient fine-tuning of language models with zero-init attention. arXiv preprint arXiv:2303.16199 (2023)."},{"key":"e_1_2_1_64_1","volume-title":"Retrieval-augmented generation for ai-generated content: A survey. arXiv preprint arXiv:2402.19473","author":"Zhao Penghao","year":"2024","unstructured":"Penghao Zhao, Hailin Zhang, Qinhan Yu, Zhengren Wang, Yunteng Geng, Fangcheng Fu, Ling Yang, Wentao Zhang, and Bin Cui. 2024. Retrieval-augmented generation for ai-generated content: A survey. arXiv preprint arXiv:2402.19473 (2024)."},{"key":"e_1_2_1_65_1","volume-title":"LLM-Enhanced Data Management. arXiv preprint arXiv:2402.02643","author":"Zhou Xuanhe","year":"2024","unstructured":"Xuanhe Zhou, Xinyang Zhao, and Guoliang Li. 2024. LLM-Enhanced Data Management. 
arXiv preprint arXiv:2402.02643 (2024)."},{"key":"e_1_2_1_66_1","doi-asserted-by":"publisher","DOI":"10.1145\/3666025.3699355"}],"container-title":["Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3749496","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3749496","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,25]],"date-time":"2025-09-25T16:31:07Z","timestamp":1758817867000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3749496"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,9,3]]},"references-count":66,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2025,9,3]]}},"alternative-id":["10.1145\/3749496"],"URL":"https:\/\/doi.org\/10.1145\/3749496","relation":{},"ISSN":["2474-9567"],"issn-type":[{"value":"2474-9567","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,9,3]]},"assertion":[{"value":"2025-09-03","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}