{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,2]],"date-time":"2026-04-02T15:33:08Z","timestamp":1775143988694,"version":"3.50.1"},"reference-count":97,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2023,6,12]],"date-time":"2023-06-12T00:00:00Z","timestamp":1686528000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Interact. Mob. Wearable Ubiquitous Technol."],"published-print":{"date-parts":[[2023,6,12]]},"abstract":"<jats:p>The scarcity of training data available for IMUs in wearables poses a serious challenge for IMU-based American Sign Language (ASL) recognition. In this paper, we ask the following question: can we \"translate\" the large number of publicly available, in-the-wild ASL videos to their corresponding IMU data? We answer this question by presenting a video-to-IMU translation framework (Vi2IMU) that takes as input user videos and estimates the IMU acceleration and gyro from the perspective of the user's wrist. Vi2IMU consists of two modules: a wrist orientation estimation module that accounts for wrist rotations by carefully incorporating hand joint positions, and an acceleration and gyro prediction module that leverages the orientation for transformation while capturing the contributions of hand movements and shape to produce realistic wrist acceleration and gyro data. We evaluate Vi2IMU by translating publicly available ASL videos to their corresponding wrist IMU data and train a gesture recognition model purely using the translated data. 
Our results show that the model using translated data performs reasonably well compared to the same model trained using measured IMU data.<\/jats:p>","DOI":"10.1145\/3596261","type":"journal-article","created":{"date-parts":[[2023,6,12]],"date-time":"2023-06-12T18:58:16Z","timestamp":1686596296000},"page":"1-34","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":22,"title":["Synthetic Smartwatch IMU Data Generation from In-the-wild ASL Videos"],"prefix":"10.1145","volume":"7","author":[{"ORCID":"https:\/\/orcid.org\/0009-0005-4904-4260","authenticated-orcid":false,"given":"Panneer Selvam","family":"Santhalingam","sequence":"first","affiliation":[{"name":"Computer Science Department, George Mason University, Fairfax, Virginia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0793-002X","authenticated-orcid":false,"given":"Parth","family":"Pathak","sequence":"additional","affiliation":[{"name":"Computer Science Department, George Mason University, Fairfax, Virginia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0435-0035","authenticated-orcid":false,"given":"Huzefa","family":"Rangwala","sequence":"additional","affiliation":[{"name":"Computer Science Department, George Mason University, Fairfax, Virginia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4619-3277","authenticated-orcid":false,"given":"Jana","family":"Kosecka","sequence":"additional","affiliation":[{"name":"Computer Science Department, George Mason University, Fairfax, Virginia"}]}],"member":"320","published-online":{"date-parts":[[2023,6,12]]},"reference":[{"key":"e_1_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/3300061.3300117"},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/3517257"},{"key":"e_1_2_2_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3264746.3264805"},{"key":"e_1_2_2_4_1","first-page":"11164","article-title":"Spatial-temporal graph convolutional networks for sign language 
recognition","volume":"1901","author":"de Amorim C. C.","year":"2019","unstructured":"C. C. de Amorim, D. Mac\u00eado, and C. Zanchettin, \"Spatial-temporal graph convolutional networks for sign language recognition,\" CoRR, vol. abs\/1901.11164, 2019.","journal-title":"CoRR"},{"key":"e_1_2_2_5_1","first-page":"1","volume-title":"Large scale sign language interpretation,\" in 2019 14th IEEE International Conference on Automatic Face Gesture Recognition (FG","author":"Yuan T.","year":"2019","unstructured":"T. Yuan, S. Sah, T. Ananthanarayana, C. Zhang, A. Bhat, S. Gandhi, and R. Ptucha, \"Large scale sign language interpretation,\" in 2019 14th IEEE International Conference on Automatic Face Gesture Recognition (FG 2019), pp. 1--5, May 2019."},{"key":"e_1_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3131672.3131693"},{"key":"e_1_2_2_7_1","first-page":"01053","article-title":"MS-ASL: A large-scale data set and benchmark for understanding american sign language","volume":"1812","author":"Joze H. R. V.","year":"2018","unstructured":"H. R. V. Joze and O. Koller, \"MS-ASL: A large-scale data set and benchmark for understanding american sign language,\" CoRR, vol. abs\/1812.01053, 2018.","journal-title":"CoRR"},{"key":"e_1_2_2_8_1","first-page":"1448","volume-title":"Word-level deep sign language recognition from video: A new large-scale dataset and methods comparison,\" in 2020 IEEE Winter Conference on Applications of Computer Vision (WACV)","author":"Li D.","year":"2020","unstructured":"D. Li, C. R. Opazo, X. Yu, and H. Li, \"Word-level deep sign language recognition from video: A new large-scale dataset and methods comparison,\" in 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 
1448--1458, 2020."},{"key":"e_1_2_2_9_1","first-page":"44","volume-title":"American sign language alphabet recognition using microsoft kinect,\" in 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","author":"Dong C.","year":"2015","unstructured":"C. Dong, M. C. Leu, and Z. Yin, \"American sign language alphabet recognition using microsoft kinect,\" in 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 44--52, June 2015."},{"key":"e_1_2_2_10_1","first-page":"1","volume-title":"Sign language recognition using 3d convolutional neural networks,\" in 2015 IEEE International Conference on Multimedia and Expo (ICME)","author":"Huang J.","year":"2015","unstructured":"J. Huang, W. Zhou, H. Li, and W. Li, \"Sign language recognition using 3d convolutional neural networks,\" in 2015 IEEE International Conference on Multimedia and Expo (ICME), pp. 1--6, June 2015."},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/3450268.3453537"},{"key":"e_1_2_2_12_1","first-page":"2021","volume-title":"Approaching the real-world: Supporting activity recognition training with virtual imu data,\" Proc. ACM Interact. Mob. Wearable Ubiquitous Technol","author":"Kwon H.","unstructured":"H. Kwon, B. Wang, G. D. Abowd, and T. Pl\u00f6tz, \"Approaching the real-world: Supporting activity recognition training with virtual imu data,\" Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., vol. 5, sep 2021."},{"key":"e_1_2_2_13_1","first-page":"261","volume-title":"A deep learning method for complex human activity recognition using virtual wearable sensors,\" in Spatial Data and Intelligence","author":"Xiao F.","year":"2021","unstructured":"F. Xiao, L. Pei, L. Chu, D. Zou, W. Yu, Y. Zhu, and T. Li, \"A deep learning method for complex human activity recognition using virtual wearable sensors,\" in Spatial Data and Intelligence (X. Meng, X. Xie, Y. Yue, and Z. Ding, eds.), (Cham), pp. 
261--270, Springer International Publishing, 2021."},{"key":"e_1_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3341162.3345590"},{"key":"e_1_2_2_15_1","volume-title":"UbiComp-ISWC '20, (New York","author":"Zhang S.","year":"2020","unstructured":"S. Zhang and N. Alshurafa, \"Deep generative cross-modal on-body accelerometer data synthesis from videos,\" in Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers, UbiComp-ISWC '20, (New York, NY, USA), p. 223--227, Association for Computing Machinery, 2020."},{"key":"e_1_2_2_16_1","first-page":"2018","article-title":"Deep inertial poser: Learning to reconstruct human pose from sparse inertial measurements in real time","volume":"37","author":"Huang Y.","unstructured":"Y. Huang, M. Kaufmann, E. Aksan, M. J. Black, O. Hilliges, and G. Pons-Moll, \"Deep inertial poser: Learning to reconstruct human pose from sparse inertial measurements in real time,\" ACM Trans. Graph., vol. 37, dec 2018.","journal-title":"ACM Trans. Graph."},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411841"},{"key":"e_1_2_2_18_1","volume-title":"Translating videos into synthetic training data for wearable sensor-based activity recognition systems using residual deep convolutional networks,\" Applied Sciences","author":"Fortes Rey V.","unstructured":"V. Fortes Rey, K. K. Garewal, and P. Lukowicz, \"Translating videos into synthetic training data for wearable sensor-based activity recognition systems using residual deep convolutional networks,\" Applied Sciences, vol. 11, no. 7, 2021."},{"key":"e_1_2_2_19_1","volume-title":"UbiComp '18, (New York","author":"Takeda S.","year":"2018","unstructured":"S. Takeda, T. Okita, P. Lago, and S. 
Inoue, \"A multi-sensor setting activity recognition simulation tool,\" in Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers, UbiComp '18, (New York, NY, USA), p. 1444--1448, Association for Computing Machinery, 2018."},{"key":"e_1_2_2_20_1","unstructured":"\"Asl mom.\" https:\/\/www.signingsavvy.com\/sign\/mom."},{"key":"e_1_2_2_21_1","unstructured":"\"Asl color.\" https:\/\/www.signingsavvy.com\/sign\/COLOR\/1136\/1."},{"key":"e_1_2_2_22_1","unstructured":"\"Asl slow.\" https:\/\/www.signingsavvy.com\/."},{"key":"e_1_2_2_23_1","volume-title":"Openpose: Realtime multi-person 2d pose estimation using part affinity fields","author":"Cao Z.","year":"2019","unstructured":"Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei, and Y. Sheikh, \"Openpose: Realtime multi-person 2d pose estimation using part affinity fields,\" 2019."},{"key":"e_1_2_2_24_1","volume-title":"Hand keypoint detection in single images using multiview bootstrapping,\" in CVPR","author":"Simon T.","year":"2017","unstructured":"T. Simon, H. Joo, I. Matthews, and Y. Sheikh, \"Hand keypoint detection in single images using multiview bootstrapping,\" in CVPR, 2017."},{"key":"e_1_2_2_25_1","volume-title":"Realtime multi-person 2d pose estimation using part affinity fields,\" in CVPR","author":"Cao Z.","year":"2017","unstructured":"Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh, \"Realtime multi-person 2d pose estimation using part affinity fields,\" in CVPR, 2017."},{"key":"e_1_2_2_26_1","volume-title":"Oct","author":"Martinez J.","year":"2017","unstructured":"J. Martinez, R. Hossain, J. Romero, and J. J. 
Little, \"A simple yet effective baseline for 3d human pose estimation,\" in Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017."},{"key":"e_1_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.3758\/s13428-016-0742-0"},{"key":"e_1_2_2_28_1","unstructured":"\"Asl brother.\" https:\/\/www.signingsavvy.com\/sign\/BROTHER\/57\/1."},{"key":"e_1_2_2_29_1","unstructured":"A. F. Agarap \"Deep learning using rectified linear units (relu) \" 2019."},{"key":"e_1_2_2_30_1","volume-title":"UbiComp '18, (New York","author":"Takeda S.","year":"2018","unstructured":"S. Takeda, T. Okita, P. Lago, and S. Inoue, \"A multi-sensor setting activity recognition simulation tool,\" in Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers, UbiComp '18, (New York, NY, USA), p. 1444--1448, Association for Computing Machinery, 2018."},{"key":"e_1_2_2_31_1","first-page":"01874","article-title":"A deep learning method for complex human activity recognition using virtual wearable sensors","volume":"2003","author":"Xiao F.","year":"2020","unstructured":"F. Xiao, L. Pei, L. Chu, D. Zou, W. Yu, Y. Zhu, and T. Li, \"A deep learning method for complex human activity recognition using virtual wearable sensors,\" CoRR, vol. abs\/2003.01874, 2020.","journal-title":"CoRR"},{"key":"e_1_2_2_32_1","unstructured":"https:\/\/www.signingsavvy.com\/sign\/NO\/291\/1."},{"key":"e_1_2_2_33_1","first-page":"9","volume-title":"HotMobile '15","author":"Xu C.","unstructured":"C. Xu, P. H. Pathak, and P. Mohapatra, \"Finger-writing with smartwatch: A case for finger and hand gesture recognition using smartwatch,\" in Proceedings of the 16th International Workshop on Mobile Computing Systems and Applications, HotMobile '15, (New York, NY, USA), p. 
9--14, Association for Computing Machinery, 2015."},{"key":"e_1_2_2_34_1","first-page":"162","volume-title":"MobiSys '14","author":"Gummeson J.","unstructured":"J. Gummeson, B. Priyantha, and J. Liu, \"An energy harvesting wearable ring platform for gestureinput on surfaces,\" in Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services, MobiSys '14, (New York, NY, USA), p. 162--175, Association for Computing Machinery, 2014."},{"key":"e_1_2_2_35_1","volume-title":"Estimation of 3d body center of mass acceleration and instantaneous velocity from a wearable inertial sensor network in transfemoral amputee gait: A case study,\" Sensors","author":"Simonetti E.","unstructured":"E. Simonetti, E. Bergamini, G. Vannozzi, J. Bascou, and H. Pillet, \"Estimation of 3d body center of mass acceleration and instantaneous velocity from a wearable inertial sensor network in transfemoral amputee gait: A case study,\" Sensors, vol. 21, no. 9, 2021."},{"key":"e_1_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.gaitpost.2009.11.003"},{"key":"e_1_2_2_37_1","unstructured":"E. Munguia Tapia Using machine learning for real-time activity recognition and estimation of energy expenditure. PhD thesis Massachusetts Institute of Technology 2008."},{"key":"e_1_2_2_38_1","first-page":"08114","article-title":"A survey on multi-task learning","volume":"1707","author":"Zhang Y.","year":"2017","unstructured":"Y. Zhang and Q. Yang, \"A survey on multi-task learning,\" CoRR, vol. abs\/1707.08114, 2017.","journal-title":"CoRR"},{"key":"e_1_2_2_39_1","first-page":"05098","article-title":"An overview of multi-task learning in deep neural networks","volume":"1706","author":"Ruder S.","year":"2017","unstructured":"S. Ruder, \"An overview of multi-task learning in deep neural networks,\" CoRR, vol. 
abs\/1706.05098, 2017.","journal-title":"CoRR"},{"key":"e_1_2_2_40_1","unstructured":"\"Asl finish.\" https:\/\/www.signingsavvy.com\/sign\/FINISH\/149\/1."},{"key":"e_1_2_2_41_1","unstructured":"\"Azure kinect dk.\" https:\/\/azure.microsoft.com\/en-us\/services\/kinect-dk\/."},{"key":"e_1_2_2_42_1","unstructured":"\"Google pixel 4a.\" https:\/\/store.google.com\/us\/product\/pixel_4a."},{"key":"e_1_2_2_43_1","unstructured":"\"Azure body tracking api.\" https:\/\/docs.microsoft.com\/en-us\/azure\/kinect-dk\/get-body-tracking-results."},{"key":"e_1_2_2_44_1","volume-title":"Pytorch: An imperative style, high-performance deep learning library","author":"Paszke A.","year":"2019","unstructured":"A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. K\u00f6pf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, \"Pytorch: An imperative style, high-performance deep learning library,\" 2019."},{"key":"e_1_2_2_45_1","volume-title":"Adam: A method for stochastic optimization","author":"Kingma D. P.","year":"2017","unstructured":"D. P. Kingma and J. Ba, \"Adam: A method for stochastic optimization,\" 2017."},{"key":"e_1_2_2_46_1","first-page":"402","volume-title":"Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping,\" Advances in neural information processing systems","author":"Caruana R.","year":"2001","unstructured":"R. Caruana, S. Lawrence, and L. Giles, \"Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping,\" Advances in neural information processing systems, pp. 
402--408, 2001."},{"key":"e_1_2_2_47_1","unstructured":"\"Ridge regression.\" https:\/\/en.wikipedia.org\/wiki\/Ridge_regression."},{"key":"e_1_2_2_48_1","unstructured":"\"L2 norm.\" https:\/\/mathworld.wolfram.com\/L2-Norm.html."},{"key":"e_1_2_2_49_1","unstructured":"\"Asl hurt.\" https:\/\/www.signingsavvy.com\/search\/hurt."},{"key":"e_1_2_2_50_1","unstructured":"\"Asl cheese.\" https:\/\/www.signingsavvy.com\/search\/cheese."},{"key":"e_1_2_2_51_1","unstructured":"\"Accuracy metrics.\" https:\/\/en.wikipedia.org\/wiki\/Accuracy_and_precision."},{"key":"e_1_2_2_52_1","unstructured":"\"vi2imu dataset.\" https:\/\/github.com\/sensys2022\/Vi2IMU."},{"key":"e_1_2_2_53_1","unstructured":"\"Asl sister.\" https:\/\/www.signingsavvy.com\/sign\/SISTER\/392\/1."},{"key":"e_1_2_2_54_1","unstructured":"\"Rotation matrix.\" https:\/\/www.handspeak.com\/word\/search\/index.php?id=1992."},{"key":"e_1_2_2_55_1","unstructured":"\"Rotation matrix.\" https:\/\/www.lingvano.com\/asl\/blog\/please-in-sign-language\/."},{"key":"e_1_2_2_56_1","volume-title":"June","author":"Patricia N.","year":"2014","unstructured":"N. Patricia and B. Caputo, \"Learning to learn, from transfer learning to domain adaptation: A unifying perspective,\" in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014."},{"key":"e_1_2_2_57_1","first-page":"11806","article-title":"An introduction to domain adaptation and transfer learning","volume":"1812","author":"Kouw W. M.","year":"2018","unstructured":"W. M. Kouw, \"An introduction to domain adaptation and transfer learning,\" CoRR, vol. 
abs\/1812.11806, 2018.","journal-title":"CoRR"},{"key":"e_1_2_2_58_1","doi-asserted-by":"publisher","DOI":"10.1109\/JBHI.2016.2598302"},{"key":"e_1_2_2_59_1","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/TMC.2023.3235935","article-title":"Multimodal fusion framework based on statistical attention and contrastive attention for sign language recognition","author":"Zhang J.","year":"2023","unstructured":"J. Zhang, Q. Wang, Q. Wang, and Z. Zheng, \"Multimodal fusion framework based on statistical attention and contrastive attention for sign language recognition,\" IEEE Transactions on Mobile Computing, pp. 1--13, 2023.","journal-title":"IEEE Transactions on Mobile Computing"},{"key":"e_1_2_2_60_1","doi-asserted-by":"publisher","DOI":"10.1007\/s12530-020-09365-y"},{"key":"e_1_2_2_61_1","first-page":"2735","article-title":"How2sign: A large-scale multimodal dataset for continuous american sign language","author":"Duarte A.","year":"2021","unstructured":"A. Duarte, S. Palaskar, L. Ventura, D. Ghadiyaram, K. DeHaan, F. Metze, J. Torres, and X. Giro-i Nieto, \"How2sign: A large-scale multimodal dataset for continuous american sign language,\" in Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2735--2744, June 2021.","journal-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)"},{"key":"e_1_2_2_62_1","first-page":"1459","article-title":"Word-level deep sign language recognition from video: A new large-scale dataset and methods comparison","author":"Li D.","year":"2020","unstructured":"D. Li, C. Rodriguez, X. Yu, and H. Li, \"Word-level deep sign language recognition from video: A new large-scale dataset and methods comparison,\" in Proceedings of the IEEE\/CVF winter conference on applications of computer vision, pp. 
1459--1469, 2020.","journal-title":"Proceedings of the IEEE\/CVF winter conference on applications of computer vision"},{"key":"e_1_2_2_63_1","first-page":"182","article-title":"Sign pose-based transformer for word-level sign language recognition","author":"Boh\u00e1\u010dek M.","year":"2022","unstructured":"M. Boh\u00e1\u010dek and M. Hr\u00faz, \"Sign pose-based transformer for word-level sign language recognition,\" in Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, pp. 182--191, 2022.","journal-title":"Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision"},{"key":"e_1_2_2_64_1","unstructured":"\"Wikipedia language model.\" https:\/\/nlp.cs.nyu.edu\/wikipedia-data\/."},{"key":"e_1_2_2_65_1","volume-title":"ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"1","author":"Guan Y.","year":"2017","unstructured":"Y. Guan and T. Pl\u00f6tz, \"Ensembles of deep lstm learners for activity recognition using wearables,\" Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., vol. 1, June 2017."},{"key":"e_1_2_2_66_1","first-page":"1","volume-title":"Human activity recognition with inertial sensors using a deep learning approach,\" in 2016 IEEE SENSORS","author":"Zebin T.","year":"2016","unstructured":"T. Zebin, P. J. Scully, and K. B. Ozanyan, \"Human activity recognition with inertial sensors using a deep learning approach,\" in 2016 IEEE SENSORS, pp. 1--3, 2016."},{"key":"e_1_2_2_67_1","volume-title":"Surv.","volume":"46","author":"Bulling A.","year":"2014","unstructured":"A. Bulling, U. Blanke, and B. Schiele, \"A tutorial on human activity recognition using body-worn inertial sensors,\" ACM Comput. Surv., vol. 46, Jan. 2014."},{"key":"e_1_2_2_68_1","volume-title":"Complex deep neural networks from large scale virtual imu data for effective human activity recognition using wearables,\" Sensors","author":"Kwon H.","unstructured":"H. Kwon, G. D. Abowd, and T. 
Pl\u00f6tz, \"Complex deep neural networks from large scale virtual imu data for effective human activity recognition using wearables,\" Sensors, vol. 21, no. 24, 2021."},{"key":"e_1_2_2_69_1","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0227039"},{"key":"e_1_2_2_70_1","doi-asserted-by":"publisher","DOI":"10.1145\/3414117"},{"key":"e_1_2_2_71_1","first-page":"499","volume-title":"MA)","author":"Gowda M.","year":"2017","unstructured":"M. Gowda, A. Dhekne, S. Shen, R. R. Choudhury, L. Yang, S. Golwalkar, and A. Essanian, \"Bringing iot to sports analytics,\" in 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI 17), (Boston, MA), pp. 499--513, USENIX Association, Mar. 2017."},{"key":"e_1_2_2_72_1","doi-asserted-by":"publisher","DOI":"10.1145\/3191789.3191793"},{"key":"e_1_2_2_73_1","volume-title":"Exploiting imu sensors for iot enabled health monitoring,\" IoT of Health '16, (New York","author":"Chandel V.","year":"2016","unstructured":"V. Chandel, A. Sinharay, N. Ahmed, and A. Ghose, \"Exploiting imu sensors for iot enabled health monitoring,\" IoT of Health '16, (New York, NY, USA), p. 21--22, Association for Computing Machinery, 2016."},{"key":"e_1_2_2_74_1","first-page":"614","volume-title":"ICMI '20","author":"Ahmed T.","unstructured":"T. Ahmed, M. Y. Ahmed, M. M. Rahman, E. Nemati, B. Islam, K. Vatanparvar, V. Nathan, D. McCaffrey, J. Kuang, and J. A. Gao, \"Automated time synchronization of cough events from multimodal sensors in mobile devices,\" in Proceedings of the 2020 International Conference on Multimodal Interaction, ICMI '20, (New York, NY, USA), p. 614--619, Association for Computing Machinery, 2020."},{"key":"e_1_2_2_75_1","article-title":"Bikenet: A mobile sensing system for cyclist experience mapping","volume":"6","author":"Eisenman S. B.","year":"2010","unstructured":"S. B. Eisenman, E. Miluzzo, N. D. Lane, R. A. Peterson, G.-S. Ahn, and A. T. 
Campbell, \"Bikenet: A mobile sensing system for cyclist experience mapping,\" ACM Trans. Sen. Netw., vol. 6, Jan. 2010.","journal-title":"ACM Trans. Sen. Netw."},{"key":"e_1_2_2_76_1","first-page":"89","volume-title":"SUI '15","author":"Hincapi\u00e9-Ramos J. D.","unstructured":"J. D. Hincapi\u00e9-Ramos, K. Ozacar, P. P. Irani, and Y. Kitamura, \"Gyrowand: Imu-based raycasting for augmented reality head-mounted displays,\" in Proceedings of the 3rd ACM Symposium on Spatial User Interaction, SUI '15, (New York, NY, USA), p. 89--98, Association for Computing Machinery, 2015."},{"key":"e_1_2_2_77_1","first-page":"429","volume-title":"MobiCom '18","author":"Shen S.","unstructured":"S. Shen, M. Gowda, and R. Roy Choudhury, \"Closing the gaps in inertial motion tracking,\" in Proceedings of the 24th Annual International Conference on Mobile Computing and Networking, MobiCom '18, (New York, NY, USA), p. 429--444, Association for Computing Machinery, 2018."},{"key":"e_1_2_2_78_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSEN.2014.2382568"},{"key":"e_1_2_2_79_1","first-page":"429","volume-title":"MobiCom '18","author":"Shen S.","unstructured":"S. Shen, M. Gowda, and R. Roy Choudhury, \"Closing the gaps in inertial motion tracking,\" in Proceedings of the 24th Annual International Conference on Mobile Computing and Networking, MobiCom '18, (New York, NY, USA), p. 429--444, Association for Computing Machinery, 2018."},{"key":"e_1_2_2_80_1","first-page":"197","volume-title":"MobiSys '12","author":"Wang H.","unstructured":"H. Wang, S. Sen, A. Elgohary, M. Farid, M. Youssef, and R. R. Choudhury, \"No need to war-drive: Unsupervised indoor localization,\" in Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services, MobiSys '12, (New York, NY, USA), p. 197--210, Association for Computing Machinery, 2012."},{"key":"e_1_2_2_81_1","volume-title":"June","author":"Andriluka M.","year":"2014","unstructured":"M. Andriluka, L. Pishchulin, P. 
Gehler, and B. Schiele, \"2d human pose estimation: New benchmark and state of the art analysis,\" in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014."},{"key":"e_1_2_2_82_1","volume-title":"July","author":"Chu X.","year":"2017","unstructured":"X. Chu, W. Yang, W. Ouyang, C. Ma, A. L. Yuille, and X. Wang, \"Multi-context attention for human pose estimation,\" in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017."},{"key":"e_1_2_2_83_1","volume-title":"September","author":"Xiao B.","year":"2018","unstructured":"B. Xiao, H. Wu, and Y. Wei, \"Simple baselines for human pose estimation and tracking,\" in Proceedings of the European Conference on Computer Vision (ECCV), September 2018."},{"key":"e_1_2_2_84_1","first-page":"332","volume-title":"I. Reid, H. Saito, and M.-H","author":"Li S.","year":"2015","unstructured":"S. Li and A. B. Chan, \"3d human pose estimation from monocular images with deep convolutional neural network,\" in Computer Vision -- ACCV 2014 (D. Cremers, I. Reid, H. Saito, and M.-H. Yang, eds.), (Cham), pp. 332--347, Springer International Publishing, 2015."},{"key":"e_1_2_2_85_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2011.33"},{"key":"e_1_2_2_86_1","doi-asserted-by":"publisher","DOI":"10.1109\/3DV.2018.00024"},{"key":"e_1_2_2_87_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2006.21"},{"key":"e_1_2_2_88_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-46484-8_29"},{"key":"e_1_2_2_89_1","volume-title":"July","author":"Chen C.-H.","year":"2017","unstructured":"C.-H. Chen and D. 
Ramanan, \"3d human pose estimation = 2d pose estimation + matching,\" in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017."},{"key":"e_1_2_2_90_1","first-page":"1","article-title":"Total capture: 3d human pose estimation fusing video and inertial sensors","volume":"2","author":"Trumble M.","year":"2017","unstructured":"M. Trumble, A. Gilbert, C. Malleson, A. Hilton, and J. P. Collomosse, \"Total capture: 3d human pose estimation fusing video and inertial sensors.,\" in BMVC, vol. 2, pp. 1--13, 2017.","journal-title":"BMVC"},{"key":"e_1_2_2_91_1","first-page":"92","volume-title":"Kinect=imu? learning mimo signal mappings to automatically translate activity recognition systems across sensor modalities,\" in 2012 16th International Symposium on Wearable Computers","author":"Banos O.","year":"2012","unstructured":"O. Banos, A. Calatroni, M. Damas, H. Pomares, I. Rojas, H. Sagha, J. del R. Mill'n, G. Troster, R. Chavarriaga, and D. Roggen, \"Kinect=imu? learning mimo signal mappings to automatically translate activity recognition systems across sensor modalities,\" in 2012 16th International Symposium on Wearable Computers, pp. 92--99, 2012."},{"key":"e_1_2_2_92_1","volume-title":"ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"4","author":"Cai H.","year":"2020","unstructured":"H. Cai, B. Korany, C. R. Karanam, and Y. Mostofi, \"Teaching rf to sense without rf training measurements,\" Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., vol. 4, Dec. 2020."},{"key":"e_1_2_2_93_1","volume-title":"CHI '21","author":"Ahuja K.","year":"2021","unstructured":"K. Ahuja, Y. Jiang, M. Goel, and C. 
Harrison, \"Vid2doppler: Synthesizing doppler radar data from videos for training privacy-preserving activity recognition,\" in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI '21, (New York, NY, USA), Association for Computing Machinery, 2021."},{"key":"e_1_2_2_94_1","first-page":"199","volume-title":"Imusim: A simulation environment for inertial sensing algorithm design and evaluation,\" in Proceedings of the 10th ACM\/IEEE International Conference on Information Processing in Sensor Networks","author":"Young A. D.","year":"2011","unstructured":"A. D. Young, M. J. Ling, and D. K. Arvind, \"Imusim: A simulation environment for inertial sensing algorithm design and evaluation,\" in Proceedings of the 10th ACM\/IEEE International Conference on Information Processing in Sensor Networks, pp. 199--210, 2011."},{"key":"e_1_2_2_95_1","volume-title":"Cromosim: A deep learning-based cross-modality inertial measurement simulator","author":"Hao Y.","year":"2022","unstructured":"Y. Hao, B. Wang, and R. Zheng, \"Cromosim: A deep learning-based cross-modality inertial measurement simulator,\" 2022."},{"key":"e_1_2_2_96_1","volume-title":"ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"5","author":"Tong C.","year":"2022","unstructured":"C. Tong, J. Ge, and N. D. Lane, \"Zero-shot learning for imu-based activity recognition using video embeddings,\" Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., vol. 5, dec 2022."},{"key":"e_1_2_2_97_1","volume-title":"ACM Interact. Mob. Wearable Ubiquitous Technol.","volume":"5","author":"Bhalla S.","year":"2022","unstructured":"S. Bhalla, M. Goel, and R. Khurana, \"Imu2doppler: Cross-modal domain adaptation for doppler-based activity recognition using imu data,\" Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., vol. 
5, dec 2022."}],"container-title":["Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3596261","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3596261","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,7,14]],"date-time":"2025-07-14T04:46:15Z","timestamp":1752468375000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3596261"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,6,12]]},"references-count":97,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2023,6,12]]}},"alternative-id":["10.1145\/3596261"],"URL":"https:\/\/doi.org\/10.1145\/3596261","relation":{},"ISSN":["2474-9567"],"issn-type":[{"value":"2474-9567","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,6,12]]},"assertion":[{"value":"2023-06-12","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}