{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,26]],"date-time":"2025-09-26T00:20:51Z","timestamp":1758846051599,"version":"3.44.0"},"reference-count":73,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2024,11,21]],"date-time":"2024-11-21T00:00:00Z","timestamp":1732147200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100006374","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["No.62222202, No.62232004, No.U23A20272"],"award-info":[{"award-number":["No.62222202, No.62232004, No.U23A20272"]}],"id":[{"id":"10.13039\/501100006374","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Beijing Natural Science Foundation","award":["No. L223002"],"award-info":[{"award-number":["No. L223002"]}]},{"DOI":"10.13039\/501100013314","name":"111 Project","doi-asserted-by":"crossref","award":["No. B18008"],"award-info":[{"award-number":["No. B18008"]}],"id":[{"id":"10.13039\/501100013314","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Interact. Mob. Wearable Ubiquitous Technol."],"published-print":{"date-parts":[[2024,11,21]]},"abstract":"<jats:p>Millimeter-wave radar shows great sensing capabilities for pervasive and privacy-preserving gesture recognition. However, the lack of large-scale, dynamic radar datasets hinders the advancement of deep learning models for generalized gesture recognition in dynamic scenes. 
To address this problem, we design a system that leverages abundant dynamic 2D videos to generate realistic radar data, which confronts two challenges: i) simulating the complex signal reflection characteristics of humans and the background, and ii) extracting elusive gesture-relevant features from dynamic radar data. To this end, we design Uranus with two key components: (i) a dynamic data generation network (DDG-Net) that combines several key modules (a human reflection model, a background reflection extractor, and a data fitting model) to simulate the signal reflection characteristics of humans and the background, and then fits the number and global distribution of points in point clouds to generate realistic radar data; (ii) a dynamic gesture recognition network (DGR-Net) that combines two modules, spatial feature extraction and global feature fusion, to extract the spatial and global features of points in point clouds, respectively, achieving generalized gesture recognition. We implement and evaluate Uranus with dynamic video data from public video sources and self-collected radar data, demonstrating that Uranus outperforms state-of-the-art approaches for gesture recognition in dynamic scenes.<\/jats:p>","DOI":"10.1145\/3699754","type":"journal-article","created":{"date-parts":[[2024,11,21]],"date-time":"2024-11-21T12:23:32Z","timestamp":1732191812000},"page":"1-28","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["Uranus: Empowering Generalized Gesture Recognition with Mobility through Generating Large-scale mmWave Radar Data"],"prefix":"10.1145","volume":"8","author":[{"ORCID":"https:\/\/orcid.org\/0009-0006-8656-5134","authenticated-orcid":false,"given":"Yue","family":"Ling","sequence":"first","affiliation":[{"name":"State Key Laboratory of Network and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7337-9168","authenticated-orcid":false,"given":"Dong","family":"Zhao","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Network and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1123-6978","authenticated-orcid":false,"given":"Kaikai","family":"Deng","sequence":"additional","affiliation":[{"name":"School of Information Engineering, Henan University of Science and Technology, Henan, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0000-4531-0609","authenticated-orcid":false,"given":"Kangwen","family":"Yin","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Network and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8492-1699","authenticated-orcid":false,"given":"Wenxin","family":"Zheng","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Network and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7199-5047","authenticated-orcid":false,"given":"Huadong","family":"Ma","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Network and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2024,11,21]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1109\/TMC.2018.2879075"},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445138"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1007\/s12369-020-00626-z"},{"key":"e_1_2_1_4_1","first-page":"1","article-title":"Gesture recognition method using acoustic sensing on usual garment","volume":"6","author":"Amesaka Takashi","year":"2022","unstructured":"Takashi Amesaka, Hiroki Watanabe, Masanori 
Sugimoto, and Buntarou Shizuki. 2022. Gesture recognition method using acoustic sensing on usual garment. In Proc. of ACM IMWUT, Vol. 6. 1--27.","journal-title":"Proc. of ACM IMWUT"},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1007\/s12369-013-0193-z"},{"key":"e_1_2_1_6_1","volume-title":"Nadia Magnenat Thalmann, and Daniel Thalmann","author":"Boulic Ronan","year":"1990","unstructured":"Ronan Boulic, Nadia Magnenat Thalmann, and Daniel Thalmann. 1990. A global human walking model with real-time kinematic personification. The visual computer 6 (1990), 344--358."},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/3636534.3649350"},{"key":"e_1_2_1_8_1","first-page":"1","article-title":"Sensecollect: We need efficient ways to collect on-body sensor-based human activity data!","volume":"5","author":"Chen Wenqiang","year":"2021","unstructured":"Wenqiang Chen, Shupei Lin, Elizabeth Thompson, and John Stankovic. 2021. Sensecollect: We need efficient ways to collect on-body sensor-based human activity data!. In Proc. of ACM IMWUT, Vol. 5. 1--27.","journal-title":"Proc. of ACM IMWUT"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/3625687.3625798"},{"key":"e_1_2_1_10_1","first-page":"1","article-title":"Geryon: Edge Assisted Real-time and Robust Object Detection on Drones via mmWave Radar and Camera Fusion","volume":"6","author":"Deng Kaikai","year":"2022","unstructured":"Kaikai Deng, Dong Zhao, Qiaoyue Han, Shuyue Wang, Zihan Zhang, Anfu Zhou, and Huadong Ma. 2022. Geryon: Edge Assisted Real-time and Robust Object Detection on Drones via mmWave Radar and Camera Fusion. In Proc. of ACM IMWUT, Vol. 6. 1--27.","journal-title":"Proc. 
of ACM IMWUT"},{"key":"e_1_2_1_11_1","first-page":"1","article-title":"Midas: Generating mmWave Radar Data from Videos for Training Pervasive and Privacy-preserving Human Sensing Tasks","volume":"7","author":"Deng Kaikai","year":"2023","unstructured":"Kaikai Deng, Dong Zhao, Qiaoyue Han, Zihan Zhang, Shuyue Wang, Anfu Zhou, and Huadong Ma. 2023. Midas: Generating mmWave Radar Data from Videos for Training Pervasive and Privacy-preserving Human Sensing Tasks. In Proc. of ACM IMWUT, Vol. 7. 1--26.","journal-title":"Proc. of ACM IMWUT"},{"key":"e_1_2_1_12_1","volume-title":"Generating Training Data of mmWave Radars from Videos for Privacy-Preserving Human Sensing with Mobility","author":"Deng Kaikai","year":"2023","unstructured":"Kaikai Deng, Dong Zhao, Zihan Zhang, Shuyue Wang, Wenxin Zheng, and Huadong Ma. 2023. Midas++: Generating Training Data of mmWave Radars from Videos for Privacy-Preserving Human Sensing with Mobility. IEEE Transactions on Mobile Computing (2023)."},{"key":"e_1_2_1_13_1","volume-title":"G3R: Generating Rich and Fine-grained mmWave Radar Data from 2D Videos for Generalized Gesture Recognition. arXiv preprint arXiv:2404.14934","author":"Deng Kaikai","year":"2024","unstructured":"Kaikai Deng, Dong Zhao, Wenxin Zheng, Yue Ling, Kangwen Yin, and Huadong Ma. 2024. G3R: Generating Rich and Fine-grained mmWave Radar Data from 2D Videos for Generalized Gesture Recognition. arXiv preprint arXiv:2404.14934 (2024)."},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/RADAR.2019.8835589"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICPR.2016.7899609"},{"key":"e_1_2_1_16_1","first-page":"1","article-title":"Wi-Learner: Towards one-shot learning for cross-domain Wi-Fi based gesture recognition","volume":"6","author":"Feng Chao","year":"2022","unstructured":"Chao Feng, Nan Wang, Yicheng Jiang, Xia Zheng, Kang Li, Zheng Wang, and Xiaojiang Chen. 2022. 
Wi-Learner: Towards one-shot learning for cross-domain Wi-Fi based gesture recognition. In Proc. of ACM IMWUT, Vol. 6. 1--27.","journal-title":"Proc. of ACM IMWUT"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCE.2018.2867801"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/WACV56688.2023.00505"},{"key":"e_1_2_1_19_1","volume-title":"Millimeter Wave Radar-based Human Activity Recognition for Healthcare Monitoring Robot. arXiv preprint arXiv:2405.01882","author":"Gu Zhanzhong","year":"2024","unstructured":"Zhanzhong Gu, Xiangjian He, Gengfa Fang, Chengpei Xu, Feng Xia, and Wenjing Jia. 2024. Millimeter Wave Radar-based Human Activity Recognition for Healthcare Monitoring Robot. arXiv preprint arXiv:2405.01882 (2024)."},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.322"},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP49357.2023.10096198"},{"key":"e_1_2_1_23_1","unstructured":"Texas Instruments. 2019. TI IWR1443 single-chip 76-GHz to 81-GHz mmWave sensor evaluation module. Retrieved 2019 from https:\/\/www.ti.com\/tool\/IWR1443BOOST"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/CNS.2019.8802686"},{"key":"e_1_2_1_25_1","volume-title":"Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980","author":"Kingma Diederik P","year":"2014","unstructured":"Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)."},{"key":"e_1_2_1_26_1","volume-title":"Proc. of NIPS","volume":"25","author":"Krizhevsky Alex","year":"2012","unstructured":"Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Proc. of NIPS, Vol. 
25."},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2011.6126543"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.3390\/biomimetics8080609"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/3560905.3568528"},{"key":"e_1_2_1_30_1","volume-title":"Simon See, Xiaogang Wang, Hongwei Qin, and Hongsheng Li.","author":"Li Dasong","year":"2023","unstructured":"Dasong Li, Xiaoyu Shi, Yi Zhang, Ka Chun Cheung, Simon See, Xiaogang Wang, Hongwei Qin, and Hongsheng Li. 2023. A Simple Baseline for Video Restoration With Grouped Spatial-Temporal Shift. In Proc. of IEEE CVPR. 9822--9832."},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1049\/joe.2019.0557"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.00836"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/3380688.3380711"},{"key":"e_1_2_1_34_1","first-page":"1","article-title":"MTransSee: Enabling environment-independent mmWave sensing based gesture recognition via transfer learning","volume":"6","author":"Liu Haipeng","year":"2022","unstructured":"Haipeng Liu, Kening Cui, Kaiyuan Hu, Yuheng Wang, Anfu Zhou, Liang Liu, and Huadong Ma. 2022. MTransSee: Enabling environment-independent mmWave sensing based gesture recognition via transfer learning. In Proc. of ACM IMWUT, Vol. 6. 1--28.","journal-title":"Proc. of ACM IMWUT"},{"key":"e_1_2_1_35_1","first-page":"1","article-title":"Real-time arm gesture recognition in smart home scenarios via millimeter wave sensing","volume":"4","author":"Liu Haipeng","year":"2020","unstructured":"Haipeng Liu, Yuheng Wang, Anfu Zhou, Hanyue He, Wei Wang, Kunpeng Wang, Peilin Pan, Yixuan Lu, Liang Liu, and Huadong Ma. 2020. Real-time arm gesture recognition in smart home scenarios via millimeter wave sensing. In Proc. of ACM IMWUT, Vol. 4. 1--28.","journal-title":"Proc. 
of ACM IMWUT"},{"key":"e_1_2_1_36_1","first-page":"1","article-title":"Towards a Dynamic Fresnel Zone Model to WiFi-based Human Activity Recognition","volume":"7","author":"Liu Jinyi","year":"2023","unstructured":"Jinyi Liu, Wenwei Li, Tao Gu, Ruiyang Gao, Bin Chen, Fusang Zhang, Dan Wu, and Daqing Zhang. 2023. Towards a Dynamic Fresnel Zone Model to WiFi-based Human Activity Recognition. In Proc. of ACM IMWUT, Vol. 7. 1--24.","journal-title":"Proc. of ACM IMWUT"},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1109\/TMC.2022.3217487"},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCIDS.2019.8862125"},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/1057432.1057436"},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i01.5430"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3339825.3394937"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.01538"},{"key":"e_1_2_1_43_1","volume-title":"Yee Wei Law, and Javaan Chahl","author":"Perera Asanka G","year":"2018","unstructured":"Asanka G Perera, Yee Wei Law, and Javaan Chahl. 2018. UAV-GESTURE: A dataset for UAV control and gesture recognition. In Proc. of Springer ECCV. 117--128."},{"key":"e_1_2_1_44_1","volume-title":"Healthcare robots enabled with IoT and artificial intelligence for elderly patients. AI and IoT-Based Intelligent Automation in Robotics","author":"Porkodi S","year":"2021","unstructured":"S Porkodi and D Kesavaraja. 2021. Healthcare robots enabled with IoT and artificial intelligence for elderly patients. AI and IoT-Based Intelligent Automation in Robotics (2021), 87--108."},{"key":"e_1_2_1_45_1","volume-title":"Proc. of NIPS","volume":"30","author":"Qi Charles Ruizhongtai","year":"2017","unstructured":"Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. 2017. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Proc. of NIPS, Vol. 
30."},{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1109\/RadarConf2351548.2023.10149770"},{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1109\/RadarConf2147009.2021.9455194"},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1109\/TAES.2022.3221023"},{"key":"e_1_2_1_49_1","first-page":"1","article-title":"Squigglemilli: Approximating sar imaging on mobile millimeter-wave devices","volume":"5","author":"Regmi Hem","year":"2021","unstructured":"Hem Regmi, Moh Sabbir Saadat, Sanjib Sur, and Srihari Nelakuditi. 2021. Squigglemilli: Approximating sar imaging on mobile millimeter-wave devices. In Proc. of ACM IMWUT, Vol. 5. 1--26.","journal-title":"Proc. of ACM IMWUT"},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"e_1_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1109\/TMC.2022.3153717"},{"key":"e_1_2_1_52_1","first-page":"1","article-title":"mmasl: Environment-independent asl gesture recognition using 60 ghz millimeter-wave signals","volume":"4","author":"Santhalingam Panneer Selvam","year":"2020","unstructured":"Panneer Selvam Santhalingam, Al Amin Hosain, Ding Zhang, Parth Pathak, Huzefa Rangwala, and Raja Kushalnagar. 2020. mmasl: Environment-independent asl gesture recognition using 60 ghz millimeter-wave signals. In Proc. of ACM IMWUT, Vol. 4. 1--30.","journal-title":"Proc. of ACM IMWUT"},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1109\/RADAR.2018.8378629"},{"key":"e_1_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICRA48506.2021.9562089"},{"key":"e_1_2_1_55_1","volume-title":"Proc. of IEEE CVPR. 1599--1610","author":"Qin Hongwei","year":"2023","unstructured":"Xiaoyu Shi, Zhaoyang Huang, Dasong Li, Manyuan Zhang, Ka Chun Cheung, Simon See, Hongwei Qin, Jifeng Dai, and Hongsheng Li. 2023. Flowformer++: Masked cost volume autoencoding for pretraining optical flow estimation. In Proc. of IEEE CVPR. 
1599--1610."},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00931"},{"key":"e_1_2_1_57_1","volume-title":"Boximator: Generating Rich and Controllable Motions for Video Synthesis. arXiv preprint arXiv:2402.01566","author":"Wang Jiawei","year":"2024","unstructured":"Jiawei Wang, Yuchen Zhang, Jiaxin Zou, Yan Zeng, Guoqiang Wei, Liping Yuan, and Hang Li. 2024. Boximator: Generating Rich and Controllable Motions for Video Synthesis. arXiv preprint arXiv:2402.01566 (2024)."},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1109\/MMSP.2019.8901772"},{"key":"e_1_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1145\/3326362"},{"key":"e_1_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00778-022-00775-9"},{"key":"e_1_2_1_61_1","first-page":"33330","article-title":"Point transformer v2: Grouped vector attention and partition-based pooling","volume":"35","author":"Wu Xiaoyang","year":"2022","unstructured":"Xiaoyang Wu, Yixing Lao, Li Jiang, Xihui Liu, and Hengshuang Zhao. 2022. Point transformer v2: Grouped vector attention and partition-based pooling. In Proc. of NIPS, Vol. 35. 33330--33342.","journal-title":"Proc. of NIPS"},{"key":"e_1_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1145\/3485730.3485936"},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1145\/3560905.3568524"},{"key":"e_1_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1145\/3570361.3613302"},{"key":"e_1_2_1_65_1","volume-title":"Proc. of ACM MobiCom. 268--281","author":"Zhang Fusang","year":"2022","unstructured":"Fusang Zhang, Jie Xiong, Zhaoxin Chang, Junqi Ma, and Daqing Zhang. 2022. Mobi2Sense: empowering wireless sensing with mobility. In Proc. of ACM MobiCom. 
268--281."},{"key":"e_1_2_1_66_1","first-page":"7327","article-title":"Real-time and accurate gesture recognition with commercial RFID devices","volume":"22","author":"Zhang Shigeng","year":"2022","unstructured":"Shigeng Zhang, Zijing Ma, Chengwei Yang, Xiaoyan Kui, Xuan Liu, Weiping Wang, Jianxin Wang, and Song Guo. 2022. Real-time and accurate gesture recognition with commercial RFID devices. IEEE Transactions on Mobile Computing 22, 12 (2022), 7327--7342.","journal-title":"IEEE Transactions on Mobile Computing"},{"key":"e_1_2_1_67_1","doi-asserted-by":"publisher","DOI":"10.1145\/3560905.3568542"},{"key":"e_1_2_1_68_1","first-page":"8671","article-title":"Widar3.0: Zero-effort cross-domain gesture recognition with Wi-Fi","volume":"44","author":"Zhang Yi","year":"2021","unstructured":"Yi Zhang, Yue Zheng, Kun Qian, Guidong Zhang, Yunhao Liu, Chenshu Wu, and Zheng Yang. 2021. Widar3.0: Zero-effort cross-domain gesture recognition with Wi-Fi. IEEE Transactions on Pattern Analysis and Machine Intelligence 44, 11 (2021), 8671--8688.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"e_1_2_1_69_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.01595"},{"key":"e_1_2_1_70_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICRA40945.2020.9197437"},{"key":"e_1_2_1_71_1","volume-title":"Unleashing text-to-image diffusion models for visual perception. arXiv preprint arXiv:2303.02153","author":"Zhao Wenliang","year":"2023","unstructured":"Wenliang Zhao, Yongming Rao, Zuyan Liu, Benlin Liu, Jie Zhou, and Jiwen Lu. 2023. Unleashing text-to-image diffusion models for visual perception. arXiv preprint arXiv:2303.02153 (2023)."},{"key":"e_1_2_1_72_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i4.16471"},{"key":"e_1_2_1_73_1","volume-title":"Magicvideo: Efficient video generation with latent diffusion models. 
arXiv preprint arXiv:2211.11018","author":"Zhou Daquan","year":"2022","unstructured":"Daquan Zhou, Weimin Wang, Hanshu Yan, Weiwei Lv, Yizhe Zhu, and Jiashi Feng. 2022. Magicvideo: Efficient video generation with latent diffusion models. arXiv preprint arXiv:2211.11018 (2022)."}],"container-title":["Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3699754","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3699754","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,25]],"date-time":"2025-09-25T16:29:46Z","timestamp":1758817786000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3699754"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,11,21]]},"references-count":73,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2024,11,21]]}},"alternative-id":["10.1145\/3699754"],"URL":"https:\/\/doi.org\/10.1145\/3699754","relation":{},"ISSN":["2474-9567"],"issn-type":[{"type":"electronic","value":"2474-9567"}],"subject":[],"published":{"date-parts":[[2024,11,21]]},"assertion":[{"value":"2024-11-21","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}