{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,6]],"date-time":"2025-12-06T17:20:47Z","timestamp":1765041647069,"version":"3.44.0"},"reference-count":37,"publisher":"Association for Computing Machinery (ACM)","issue":"2","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Interact. Mob. Wearable Ubiquitous Technol."],"published-print":{"date-parts":[[2025,6,9]]},"abstract":"<jats:p>Action recognition using millimeter waves (mmWave) has shown great potential in various fields. However, the state-of-the-art still falls short in multi-person action recognition, including interactive action recognition. In this paper, we propose mmMulti, a multi-person action recognition method based on multi-task learning using millimeter waves. To this end, we first segregate the mmWave data and assign them to each of multiple persons, and propose two new input representations---compressed Doppler map (CDM) and point trajectory segments (PTS)---to represent the patterns and sequential characteristics of actions. Next, we leverage ConvNeXt to extract pattern features from CDM and a Transformer to extract sequential features from PTS, and fuse them via a cross-attention mechanism. Finally, we custom-design a multi-task learning model to recognize independent and interactive actions from multiple concurrent persons, enabling mmMulti to recognize single-person actions, multi-person independent actions, and multi-person interactive actions. We implement mmMulti on a commercial mmWave radar and conduct extensive experiments. mmMulti achieves single-person action recognition accuracy of 99.64%, independent action recognition accuracy of 91.03% for two persons, 72.38% for three persons, and 64.75% for four persons, and interactive action recognition accuracy of 100% for two persons. 
To the best of our knowledge, mmMulti is the first work in the field of mmWave sensing to differentiate both independent and interactive actions in multi-person scenarios, using a multi-task learning model that accomplishes multiple tasks simultaneously.<\/jats:p>","DOI":"10.1145\/3729461","type":"journal-article","created":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T21:21:56Z","timestamp":1750281716000},"page":"1-25","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["mmMulti: Multi-person Action Recognition Based on Multi-task Learning Using Millimeter Waves"],"prefix":"10.1145","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-7955-9418","authenticated-orcid":false,"given":"Rui","family":"Zhou","sequence":"first","affiliation":[{"name":"University of Electronic Science and Technology of China, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0287-8533","authenticated-orcid":false,"given":"Songlin","family":"Li","sequence":"additional","affiliation":[{"name":"University of Electronic Science and Technology of China, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0005-7751-6229","authenticated-orcid":false,"given":"Hongwang","family":"Zhang","sequence":"additional","affiliation":[{"name":"University of Electronic Science and Technology of China, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0001-0967-1332","authenticated-orcid":false,"given":"Chenxu","family":"Liu","sequence":"additional","affiliation":[{"name":"University of Electronic Science and Technology of China, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-7813-9194","authenticated-orcid":false,"given":"Jiajun","family":"Sun","sequence":"additional","affiliation":[{"name":"University of Electronic Science and Technology of China, 
China"}]}],"member":"320","published-online":{"date-parts":[[2025,6,18]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1109\/CSNT.2015.81"},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1023\/A:1007379606734"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3610902"},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00781"},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/LSENS.2019.2953022"},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3349624.3356765"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00069"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/TMC.2024.3402356"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2023.3329236"},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/3432235"},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2021.3098338"},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/COMST.2019.2934489"},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01167"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3220007"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/3310194"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1109\/MCOM.2018.1800109"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1609\/AAAI.V34I01.5430"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2021.3080655"},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSEN.2022.3141202"},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/JCSSE.2012.6261920"},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3381010"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/IPSN61024.2024.00018"},{"key":"e_1_2_1_23_1","doi-asser
ted-by":"publisher","DOI":"10.1145\/3349624.3356768"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/TAP.2021.3118805"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3383313.3412236"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.5555\/3295222.3295349"},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2021.3128548"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.23919\/JCC.2021.02.012"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/2984511.2984565"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1109\/INFOCOM41043.2020.9155293"},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2024.3360434"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/SPSympo51155.2020.9593690"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSYST.2022.3140546"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCC56324.2022.10065810"},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICFEICT57213.2022.00061"},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1109\/DCOSS.2019.00028"},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.adhoc.2021.102475"}],"container-title":["Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous 
Technologies"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3729461","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,22]],"date-time":"2025-08-22T12:22:56Z","timestamp":1755865376000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3729461"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,9]]},"references-count":37,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2025,6,9]]}},"alternative-id":["10.1145\/3729461"],"URL":"https:\/\/doi.org\/10.1145\/3729461","relation":{},"ISSN":["2474-9567"],"issn-type":[{"type":"electronic","value":"2474-9567"}],"subject":[],"published":{"date-parts":[[2025,6,9]]},"assertion":[{"value":"2025-06-18","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}