{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,10]],"date-time":"2025-12-10T09:11:18Z","timestamp":1765357878970,"version":"build-2065373602"},"publisher-location":"New York, NY, USA","reference-count":52,"publisher":"ACM","funder":[{"name":"National Key R&D Program of China","award":["2022YFF0604500"],"award-info":[{"award-number":["2022YFF0604500"]}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62272261"],"award-info":[{"award-number":["62272261"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Wuxi Research Institute of Applied Technologies, Tsinghua University","award":["20242001120"],"award-info":[{"award-number":["20242001120"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2025,6,23]]},"DOI":"10.1145\/3711875.3729134","type":"proceedings-article","created":{"date-parts":[[2025,10,2]],"date-time":"2025-10-02T19:30:22Z","timestamp":1759433422000},"page":"223-235","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":3,"title":["AutoDroid-V2: Boosting SLM-based GUI Agents via Code Generation"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0009-0008-8450-7795","authenticated-orcid":false,"given":"Hao","family":"Wen","sequence":"first","affiliation":[{"name":"Institute for AI Industry Research (AIR), Tsinghua University, Beijing, -Select-, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0002-3133-5332","authenticated-orcid":false,"given":"Shizuo","family":"Tian","sequence":"additional","affiliation":[{"name":"Institute for AI Industry Research (AIR), Tsinghua University, Beijing, -Select-, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0008-4768-408X","authenticated-orcid":false,"given":"Borislav","family":"Pavlov","sequence":"additional","affiliation":[{"name":"Institute for AI Industry Research (AIR), Tsinghua University, Beijing, -Select-, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0004-4359-9217","authenticated-orcid":false,"given":"Wenjie","family":"Du","sequence":"additional","affiliation":[{"name":"Institute for AI Industry Research (AIR), Tsinghua University, Beijing, -Select-, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0007-3351-8972","authenticated-orcid":false,"given":"Yixuan","family":"Li","sequence":"additional","affiliation":[{"name":"Institute for AI Industry Research (AIR), Tsinghua University, Beijing, -Select-, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0003-6306-4079","authenticated-orcid":false,"given":"Ge","family":"Chang","sequence":"additional","affiliation":[{"name":"Institute for AI Industry Research (AIR), Tsinghua University, Beijing, -Select-, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0002-4205-0770","authenticated-orcid":false,"given":"Shanhui","family":"Zhao","sequence":"additional","affiliation":[{"name":"Institute for AI Industry Research (AIR), Tsinghua University, Beijing, -Select-, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0004-3759-6247","authenticated-orcid":false,"given":"Jiacheng","family":"Liu","sequence":"additional","affiliation":[{"name":"Institute for AI Industry Research (AIR), Tsinghua University, Beijing, -Select-, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7352-8955","authenticated-orcid":false,"given":"Yunxin","family":"Liu","sequence":"additional","affiliation":[{"name":"Institute for AI Industry Research (AIR), Tsinghua University, Beijing, -Select-, China"},{"name":"Shanghai Artificial Intelligence Laboratory, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4515-6212","authenticated-orcid":false,"given":"Ya-Qin","family":"Zhang","sequence":"additional","affiliation":[{"name":"Institute for AI Industry Research (AIR), Tsinghua University, Beijing, -Select-, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1591-2526","authenticated-orcid":false,"given":"Yuanchun","family":"Li","sequence":"additional","affiliation":[{"name":"Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China"},{"name":"Shanghai Artificial Intelligence Laboratory, Beijing, China"},{"name":"Beijing Academy of Artificial Intelligence (BAAI), Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2025,9,25]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/2906388.2906416"},{"key":"e_1_3_2_1_2_1","volume-title":"Digirl: Training in-the-wild device-control agents with autonomous reinforcement learning. arXiv preprint arXiv:2406.11896","author":"Bai Hao","year":"2024","unstructured":"Hao Bai, Yifei Zhou, Mert Cemri, Jiayi Pan, Alane Suhr, Sergey Levine, and Aviral Kumar. 2024. Digirl: Training in-the-wild device-control agents with autonomous reinforcement learning. arXiv preprint arXiv:2406.11896 (2024)."},{"key":"e_1_3_2_1_3_1","volume-title":"Localization, Text Reading, and Beyond. arXiv preprint arXiv:2308.12966","author":"Bai Jinze","year":"2023","unstructured":"Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. 2023. Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond. arXiv preprint arXiv:2308.12966 (2023)."},{"key":"e_1_3_2_1_4_1","volume-title":"Latent State Estimation Helps UI Agents to Reason. arXiv preprint arXiv:2405.11120","author":"Bishop William E","year":"2024","unstructured":"William E Bishop, Alice Li, Christopher Rawles, and Oriana Riva. 2024. Latent State Estimation Helps UI Agents to Reason. arXiv preprint arXiv:2405.11120 (2024)."},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/800250.807503"},{"key":"e_1_3_2_1_6_1","volume-title":"European Conference on Computer Vision (ECCV).","author":"Burns Andrea","unstructured":"Andrea Burns, Deniz Arsan, Sanjna Agrawal, Ranjitha Kumar, Kate Saenko, and Bryan A. Plummer. 2022. A Dataset for Interactive Vision Language Navigation with Unknown Command Feasibility. In European Conference on Computer Vision (ECCV)."},{"key":"e_1_3_2_1_7_1","doi-asserted-by":"crossref","unstructured":"Kanzhi Cheng Qiushi Sun Yougang Chu Fangzhi Xu Yantao Li Jianbing Zhang and Zhiyong Wu. 2024. SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents. arXiv:cs.HC\/2401.10935","DOI":"10.18653\/v1\/2024.acl-long.505"},{"key":"e_1_3_2_1_8_1","volume-title":"Proceedings of the 37th International Conference on Neural Information Processing Systems (NIPS '23)","author":"Deng Xiang","year":"2024","unstructured":"Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, and Yu Su. 2024. MIND2WEB: towards a generalist agent for the web. In Proceedings of the 37th International Conference on Neural Information Processing Systems (NIPS '23). 
Curran Associates Inc., Red Hook, NY, USA, Article 1220, 24 pages."},{"key":"e_1_3_2_1_9_1","unstructured":"Abhimanyu Dubey Abhinav Jauhri Abhinav Pandey Abhishek Kadian Ahmad Al-Dahle Aiesha Letman Akhil Mathur Alan Schelten Amy Yang Angela Fan et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783 (2024)."},{"key":"e_1_3_2_1_10_1","unstructured":"Georgi Gerganov et al. 2023. llama.cpp. https:\/\/github.com\/ggerganov\/llama.cpp"},{"key":"e_1_3_2_1_11_1","unstructured":"Google. 2023. Create your own accessibility service. https:\/\/developer.android.com\/guide\/topics\/ui\/accessibility\/service. Accessed: 2023-11-11."},{"key":"e_1_3_2_1_12_1","volume-title":"Navigating the digital world as humans do: Universal visual grounding for gui agents. arXiv preprint arXiv:2410.05243","author":"Gou Boyu","year":"2024","unstructured":"Boyu Gou, Ruohan Wang, Boyuan Zheng, Yanan Xie, Cheng Chang, Yiheng Shu, Huan Sun, and Yu Su. 2024. Navigating the digital world as humans do: Universal visual grounding for gui agents. arXiv preprint arXiv:2410.05243 (2024)."},{"key":"e_1_3_2_1_13_1","volume-title":"Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, et al.","author":"Gunasekar Suriya","year":"2023","unstructured":"Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio C\u00e9sar Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, et al. 2023. Textbooks are all you need. arXiv preprint arXiv:2306.11644 (2023)."},{"key":"e_1_3_2_1_14_1","volume-title":"The Twelfth International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=9JQtrumvg8","author":"Gur Izzeddin","year":"2024","unstructured":"Izzeddin Gur, Hiroki Furuta, Austin V Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, and Aleksandra Faust. 2024. A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis. In The Twelfth International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=9JQtrumvg8"},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.01354"},{"key":"e_1_3_2_1_16_1","unstructured":"Aaron Hurst Adam Lerer Adam P Goucher Adam Perelman Aditya Ramesh Aidan Clark AJ Ostrow Akila Welihinda Alan Hayes Alec Radford et al. 2024. Gpt-4o system card. arXiv preprint arXiv:2410.21276 (2024)."},{"key":"e_1_3_2_1_17_1","volume-title":"Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al.","author":"Jiang Albert Q","year":"2024","unstructured":"Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. 2024. Mixtral of experts. arXiv preprint arXiv:2401.04088 (2024)."},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.acl-long.91"},{"key":"e_1_3_2_1_19_1","volume-title":"Scaling laws for neural language models. arXiv preprint arXiv:2001.08361","author":"Kaplan Jared","year":"2020","unstructured":"Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361 (2020)."},{"key":"e_1_3_2_1_20_1","volume-title":"select, derive, and recall: Augmenting llm with human-like memory for mobile task automation. 
arXiv preprint arXiv:2312.03003 3, 7","author":"Lee Sunjae","year":"2023","unstructured":"Sunjae Lee, Junyoung Choi, Jungjae Lee, Hojun Choi, Steven Y Ko, Sangeun Oh, and Insik Shin. 2023. Explore, select, derive, and recall: Augmenting llm with human-like memory for mobile task automation. arXiv preprint arXiv:2312.03003 3, 7 (2023), 8."},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3636534.3690682"},{"key":"e_1_3_2_1_22_1","unstructured":"Yuanchun Li Hao Wen Weijun Wang Xiangyu Li Yizhen Yuan Guohong Liu Jiacheng Liu Wenxing Xu Xiang Wang Yi Sun et al. 2024. Personal llm agents: Insights and survey about the capability efficiency and security. arXiv preprint arXiv:2401.05459 (2024)."},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE-C.2017.8"},{"key":"e_1_3_2_1_24_1","volume-title":"Jiadai Sun, Jiaqi Wang, et al.","author":"Liu Xiao","year":"2024","unstructured":"Xiao Liu, Bo Qin, Dongzhu Liang, Guang Dong, Hanyu Lai, Hanchen Zhang, Hanlin Zhao, Iat Long Iong, Jiadai Sun, Jiaqi Wang, et al. 2024. AutoGLM: Autonomous Foundation Agents for GUIs. arXiv preprint arXiv:2411.00820 (2024)."},{"key":"e_1_3_2_1_25_1","volume-title":"Small language models: Survey, measurements, and insights. arXiv preprint arXiv:2409.15790","author":"Lu Zhenyan","year":"2024","unstructured":"Zhenyan Lu, Xiang Li, Dongqi Cai, Rongjie Yi, Fangming Liu, Xiwen Zhang, Nicholas D Lane, and Mengwei Xu. 2024. Small language models: Survey, measurements, and insights. arXiv preprint arXiv:2409.15790 (2024)."},{"key":"e_1_3_2_1_26_1","volume-title":"Caution for the Environment: Multimodal Agents are Susceptible to Environmental Distractions. arXiv preprint arXiv:2408.02544","author":"Ma Xinbei","year":"2024","unstructured":"Xinbei Ma, Yiting Wang, Yao Yao, Tongxin Yuan, Aston Zhang, Zhuosheng Zhang, and Hai Zhao. 2024. Caution for the Environment: Multimodal Agents are Susceptible to Environmental Distractions. arXiv preprint arXiv:2408.02544 (2024)."},{"key":"e_1_3_2_1_27_1","unstructured":"Christopher Rawles Sarah Clinckemaillie Yifan Chang Jonathan Waltz Gabrielle Lau Marybeth Fair Alice Li William Bishop Wei Li Folawiyo Campbell-Ajala et al. 2024. AndroidWorld: A dynamic benchmarking environment for autonomous agents. arXiv preprint arXiv:2405.14573 (2024)."},{"key":"e_1_3_2_1_28_1","unstructured":"Christopher Rawles Alice Li Daniel Rodriguez Oriana Riva and Timothy Lillicrap. 2023. Android in the Wild: A Large-Scale Dataset for Android Device Control. arXiv:cs.LG\/2307.10088"},{"key":"e_1_3_2_1_29_1","volume-title":"Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, et al.","author":"Roziere Baptiste","year":"2023","unstructured":"Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, et al. 2023. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950 (2023)."},{"key":"e_1_3_2_1_30_1","volume-title":"META-GUI: Towards Multi-modal Conversational Agents on Mobile GUI. arXiv preprint arXiv:2205.11029","author":"Sun Liangtai","year":"2022","unstructured":"Liangtai Sun, Xingyu Chen, Lu Chen, Tianle Dai, Zichen Zhu, and Kai Yu. 2022. META-GUI: Towards Multi-modal Conversational Agents on Mobile GUI. 
arXiv preprint arXiv:2205.11029 (2022)."},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.findings-naacl.234"},{"key":"e_1_3_2_1_32_1","unstructured":"Chien Van Nguyen Xuan Shen Ryan Aponte Yu Xia Samyadeep Basu Zhengmian Hu Jian Chen Mihir Parmar Sasidhar Kunapuli Joe Barrow et al. 2024. A Survey of Small Language Models. arXiv preprint arXiv:2410.20011 (2024)."},{"key":"e_1_3_2_1_33_1","volume-title":"Mobile-Agent-v2: Mobile Device Operation Assistant with Effective Navigation via Multi-Agent Collaboration. arXiv preprint arXiv:2406.01014","author":"Wang Junyang","year":"2024","unstructured":"Junyang Wang, Haiyang Xu, Haitao Jia, Xi Zhang, Ming Yan, Weizhou Shen, Ji Zhang, Fei Huang, and Jitao Sang. 2024. Mobile-Agent-v2: Mobile Device Operation Assistant with Effective Navigation via Multi-Agent Collaboration. arXiv preprint arXiv:2406.01014 (2024)."},{"key":"e_1_3_2_1_34_1","volume-title":"Mobile-agent: Autonomous multi-modal mobile device agent with visual perception. arXiv preprint arXiv:2401.16158","author":"Wang Junyang","year":"2024","unstructured":"Junyang Wang, Haiyang Xu, Jiabo Ye, Ming Yan, Weizhou Shen, Ji Zhang, Fei Huang, and Jitao Sang. 2024. Mobile-agent: Autonomous multi-modal mobile device agent with visual perception. arXiv preprint arXiv:2401.16158 (2024)."},{"key":"e_1_3_2_1_35_1","unstructured":"Xingyao Wang Yangyi Chen Lifan Yuan Yizhe Zhang Yunzhu Li Hao Peng and Heng Ji. 2024. Executable Code Actions Elicit Better LLM Agents. In ICML. arXiv:2402.01030"},{"key":"e_1_3_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3636534.3649379"},{"key":"e_1_3_2_1_37_1","volume-title":"Droidbot-gpt: Gpt-powered ui automation for android. arXiv preprint arXiv:2304.07061","author":"Wen Hao","year":"2023","unstructured":"Hao Wen, Hongming Wang, Jiaxuan Liu, and Yuanchun Li. 2023. Droidbot-gpt: Gpt-powered ui automation for android. arXiv preprint arXiv:2304.07061 (2023)."},{"key":"e_1_3_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/3637528.3671650"},{"key":"e_1_3_2_1_39_1","volume-title":"On-device language models: A comprehensive review. arXiv preprint arXiv:2409.00088","author":"Xu Jiajun","year":"2024","unstructured":"Jiajun Xu, Zhiyuan Li, Wei Chen, Qun Wang, Xin Gao, Qi Cai, and Ziyuan Ling. 2024. On-device language models: A comprehensive review. arXiv preprint arXiv:2409.00088 (2024)."},{"key":"e_1_3_2_1_40_1","volume-title":"Wonderland: Large Multimodal Models for Zero-Shot Smartphone GUI Navigation. arXiv preprint arXiv:2311.07562","author":"Yan An","year":"2023","unstructured":"An Yan, Zhengyuan Yang, Wanrong Zhu, Kevin Lin, Linjie Li, Jianfeng Wang, Jianwei Yang, Yiwu Zhong, Julian McAuley, Jianfeng Gao, et al. 2023. GPT-4V in Wonderland: Large Multimodal Models for Zero-Shot Smartphone GUI Navigation. arXiv preprint arXiv:2311.07562 (2023)."},{"key":"e_1_3_2_1_41_1","volume-title":"Ferret-UI: Grounded Mobile UI Understanding with Multimodal LLMs. arXiv preprint arXiv:2404.05719","author":"You Keen","year":"2024","unstructured":"Keen You, Haotian Zhang, Eldon Schoop, Floris Weers, Amanda Swearngin, Jeffrey Nichols, Yinfei Yang, and Zhe Gan. 2024. Ferret-UI: Grounded Mobile UI Understanding with Multimodal LLMs. arXiv preprint arXiv:2404.05719 (2024)."},{"key":"e_1_3_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/3636534.3649361"},{"key":"e_1_3_2_1_43_1","unstructured":"Chaoyun Zhang Shilin He Jiaxu Qian Bowen Li Liqun Li Si Qin Yu Kang Minghua Ma Qingwei Lin Saravan Rajmohan Dongmei Zhang and Qi Zhang. 2024. 
Large Language Model-Brained GUI Agents: A Survey. arXiv:cs.AI\/2411.18279 https:\/\/arxiv.org\/abs\/2411.18279"},{"key":"e_1_3_2_1_44_1","unstructured":"Chaoyun Zhang Shilin He Jiaxu Qian Bowen Li Liqun Li Si Qin Yu Kang Minghua Ma Guyue Liu Qingwei Lin et al. 2024. Large language model-brained gui agents: A survey. arXiv preprint arXiv:2411.18279 (2024)."},{"key":"e_1_3_2_1_45_1","unstructured":"Chi Zhang Zhao Yang Jiaxuan Liu Yucheng Han Xin Chen Zebiao Huang Bin Fu and Gang Yu. 2023. AppAgent: Multimodal Agents as Smartphone Users. arXiv:cs.CV\/2312.13771"},{"key":"e_1_3_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1145\/3654777.3676382"},{"key":"e_1_3_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2024.3372777"},{"key":"e_1_3_2_1_48_1","volume-title":"Attacking Vision-Language Computer Agents via Pop-ups. arXiv preprint arXiv:2411.02391","author":"Zhang Yanzhe","year":"2024","unstructured":"Yanzhe Zhang, Tao Yu, and Diyi Yang. 2024. Attacking Vision-Language Computer Agents via Pop-ups. arXiv preprint arXiv:2411.02391 (2024)."},{"key":"e_1_3_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.findings-acl.186"},{"key":"e_1_3_2_1_50_1","unstructured":"Zhizheng Zhang Xiaoyi Zhang Wenxuan Xie and Yan Lu. 2023. Responsible Task Automation: Empowering Large Language Models as Responsible Task Automators. arXiv:cs.AI\/2306.01242"},{"key":"e_1_3_2_1_51_1","volume-title":"Levine (Eds.)","volume":"36","author":"Zheng Lianmin","year":"2023","unstructured":"Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, Hao Zhang, Joseph E Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. In Advances in Neural Information Processing Systems, A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (Eds.), Vol. 36. Curran Associates, Inc., 46595\u201346623. https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2023\/file\/91f18a1287b398d378ef22505bf41832-Paper-Datasets_and_Benchmarks.pdf"},{"key":"e_1_3_2_1_52_1","unstructured":"Qihao Zhu Daya Guo Zhihong Shao Dejian Yang Peiyi Wang Runxin Xu Y Wu Yukun Li Huazuo Gao Shirong Ma et al. 2024. DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence. 
arXiv preprint arXiv:2406.11931 (2024)."}],"event":{"name":"MobiSys '25: 23rd Annual International Conference on Mobile Systems, Applications and Services","location":"Hilton Anaheim Anaheim CA USA","acronym":"MobiSys '25","sponsor":["SIGMOBILE ACM Special Interest Group on Mobility of Systems, Users, Data and Computing","SIGOPS ACM Special Interest Group on Operating Systems"]},"container-title":["Proceedings of the 23rd Annual International Conference on Mobile Systems, Applications and Services"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3711875.3729134","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,2]],"date-time":"2025-10-02T19:31:34Z","timestamp":1759433494000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3711875.3729134"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,23]]},"references-count":52,"alternative-id":["10.1145\/3711875.3729134","10.1145\/3711875"],"URL":"https:\/\/doi.org\/10.1145\/3711875.3729134","relation":{},"subject":[],"published":{"date-parts":[[2025,6,23]]},"assertion":[{"value":"2025-09-25","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
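
The record above is the Crossref REST API metadata for the MobiSys '25 paper "AutoDroid-V2: Boosting SLM-based GUI Agents via Code Generation". As an illustrative sketch only, not part of the deposited record, the following Python shows one way such a work record can be fetched by DOI and its key fields read; the endpoint and the "status" / "message-type" / "message" envelope, as well as the field names used below, match the structure shown above, while the variable names and printing are assumptions added for clarity.

# Sketch: fetch the same Crossref work record by DOI and read a few fields.
# DOI and field names are taken from the record above; the rest is illustrative.
import json
import urllib.request

DOI = "10.1145/3711875.3729134"
url = f"https://api.crossref.org/works/{DOI}"

with urllib.request.urlopen(url) as resp:
    record = json.load(resp)

# The envelope wraps the work metadata under "message".
assert record["status"] == "ok" and record["message-type"] == "work"
work = record["message"]

title = work["title"][0]                                   # paper title
authors = [f'{a["given"]} {a["family"]}' for a in work["author"]]
venue = work["container-title"][0]                         # proceedings title
pages = work.get("page")                                   # "223-235"
refs = work["references-count"]                            # 52

print(title)
print(", ".join(authors))
print(f"{venue}, pp. {pages}, {refs} references")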