{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T18:09:48Z","timestamp":1776103788632,"version":"3.50.1"},"reference-count":87,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2021,6,23]],"date-time":"2021-06-23T00:00:00Z","timestamp":1624406400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Interact. Mob. Wearable Ubiquitous Technol."],"published-print":{"date-parts":[[2021,6,23]]},"abstract":"<jats:p>Current Virtual Reality (VR) systems are bereft of stylization and embellishment of the user's motion - concepts that have been well explored in animations for games and movies. We present CoolMoves, a system for expressive and accentuated full-body motion synthesis of a user's virtual avatar in real-time, from the limited input cues afforded by current consumer-grade VR systems, specifically headset and hand positions. We make use of existing motion capture databases as a template motion repository to draw from. We match similar spatio-temporal motions present in the database and then interpolate between them using a weighted distance metric. Joint prediction probability is then used to temporally smooth the synthesized motion, using human motion dynamics as a prior. This allows our system to work well even with very sparse motion databases (e.g., with only 3-5 motions per action). 
We validate our system with four experiments: a technical evaluation of our quantitative pose reconstruction and three additional user studies to evaluate the motion quality, embodiment and agency.<\/jats:p>","DOI":"10.1145\/3463499","type":"journal-article","created":{"date-parts":[[2021,6,24]],"date-time":"2021-06-24T16:29:19Z","timestamp":1624552159000},"page":"1-23","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":64,"title":["CoolMoves"],"prefix":"10.1145","volume":"5","author":[{"given":"Karan","family":"Ahuja","sequence":"first","affiliation":[{"name":"Carnegie Mellon University, USA"}]},{"given":"Eyal","family":"Ofek","sequence":"additional","affiliation":[{"name":"Microsoft Research, USA"}]},{"given":"Mar","family":"Gonzalez-Franco","sequence":"additional","affiliation":[{"name":"Microsoft Research, USA"}]},{"given":"Christian","family":"Holz","sequence":"additional","affiliation":[{"name":"ETH Z\u00fcrich, Switzerland"}]},{"given":"Andrew D.","family":"Wilson","sequence":"additional","affiliation":[{"name":"Microsoft Research, USA"}]}],"member":"320","published-online":{"date-parts":[[2021,6,24]]},"reference":[{"key":"e_1_2_2_1_1","unstructured":"2004. CMU MoCap. http:\/\/mocap.cs.cmu.edu\/."},{"key":"e_1_2_2_2_1","unstructured":"2018. RootMotion Final IK. https:\/\/assetstore.unity.com\/packages\/tools\/animation\/final-ik-14290."},{"key":"e_1_2_2_3_1","unstructured":"2020. FIFA2020. https:\/\/www.ea.com\/en-au\/games\/fifa."},{"key":"e_1_2_2_4_1","unstructured":"2020. HTC Vive. https:\/\/www.vive.com\/."},{"key":"e_1_2_2_5_1","unstructured":"2020. Microsoft Kinect. 
https:\/\/azure.microsoft.com\/en-us\/services\/kinect-dk\/."},{"key":"e_1_2_2_6_1","unstructured":"2020. Microsoft Mixed Reality. https:\/\/www.microsoft.com\/en-us\/windows\/windows-mixed-reality\/."},{"key":"e_1_2_2_7_1","unstructured":"2020. Motion Capture - Meta Motion sells Motion Capture Hardware and Software. https:\/\/metamotion.com\/"},{"key":"e_1_2_2_8_1","unstructured":"2020. NBA2K. https:\/\/nba.2k.com\/2k20\/en-US\/."},{"key":"e_1_2_2_9_1","unstructured":"2020. NintendoSwitch. https:\/\/www.nintendo.com\/switch\/."},{"key":"e_1_2_2_10_1","unstructured":"2020. Oculus. https:\/\/www.oculus.com\/."},{"key":"e_1_2_2_11_1","unstructured":"2020. Optitrack. https:\/\/www.optitrack.com\/."},{"key":"e_1_2_2_12_1","unstructured":"2020. Vicon. https:\/\/www.vicon.com\/."},{"key":"e_1_2_2_13_1","volume-title":"Unpaired Motion Style Transfer from Video to Animation. arXiv preprint arXiv:2005.05751","author":"Aberman Kfir","year":"2020","unstructured":"Kfir Aberman, Yijia Weng, Dani Lischinski, Daniel Cohen-Or, and Baoquan Chen. 2020. Unpaired Motion Style Transfer from Video to Animation. 
arXiv preprint arXiv:2005.05751 (2020)."},{"key":"e_1_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300752"},{"key":"e_1_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/FSKD.2015.7382353"},{"key":"e_1_2_2_16_1","volume-title":"BodySLAM: Opportunistic User Digitization in Multi-User AR\/VR Experiences. In Symposium on Spatial User Interaction. 1--8.","author":"Mayank Goel Karan Ahuja","year":"2020","unstructured":"Karan Ahuja, Mayank Goel, and Chris Harrison. 2020. BodySLAM: Opportunistic User Digitization in Multi-User AR\/VR Experiences. In Symposium on Spatial User Interaction. 1--8."},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3332165.3347889"},{"key":"e_1_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/3214260"},{"key":"e_1_2_2_19_1","doi-asserted-by":"crossref","first-page":"175","DOI":"10.1080\/00031305.1992.10475879","article-title":"An introduction to kernel and nearest-neighbor nonparametric regression","volume":"46","author":"Altman Naomi S","year":"1992","unstructured":"Naomi S Altman. 1992. An introduction to kernel and nearest-neighbor nonparametric regression. The American Statistician 46, 3 (1992), 175--185.","journal-title":"The American Statistician"},{"key":"e_1_2_2_20_1","volume-title":"Asian conference on computer vision. Springer, 136--153","author":"Aneja Deepali","year":"2016","unstructured":"
Deepali Aneja, Alex Colburn, Gary Faigin, Linda Shapiro, and Barbara Mones. 2016. Modeling stylized character expressions via deep learning. In Asian conference on computer vision. Springer, 136--153."},{"key":"e_1_2_2_21_1","volume-title":"Gpu-fs-knn: A software tool for fast and scalable knn computation using gpus. PloS one 7, 8","author":"Arefin Ahmed Shamsul","year":"2012","unstructured":"Ahmed Shamsul Arefin, Carlos Riveros, Regina Berretta, and Pablo Moscato. 2012. Gpu-fs-knn: A software tool for fast and scalable knn computation using gpus. PloS one 7, 8 (2012)."},{"key":"e_1_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/2858036.2858226"},{"key":"e_1_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/3131785.3131794"},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISMAR.2004.23"},{"key":"e_1_2_2_25_1","volume-title":"Generating Emotive Gaits for Virtual Agents Using Affect-Based Autoregression. In 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 24--35","author":"Bhattacharya Uttaran","year":"2020","unstructured":"Uttaran Bhattacharya, Nicholas Rewkowski, Pooja Guhan, Niall L Williams, Trisha Mittal, Aniket Bera, and Dinesh Manocha. 2020. Generating Emotive Gaits for Virtual Agents Using Affect-Based Autoregression. In 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 24--35."},{"key":"e_1_2_2_26_1","volume-title":"Full-body illusions and minimal phenomenal selfhood. 
Trends in cognitive sciences 13, 1","author":"Blanke Olaf","year":"2009","unstructured":"Olaf Blanke and Thomas Metzinger. 2009. Full-body illusions and minimal phenomenal selfhood. Trends in cognitive sciences 13, 1 (2009), 7--13."},{"key":"e_1_2_2_27_1","volume-title":"Rubber hands 'feel' touch that eyes see. Nature 391, 6669","author":"Botvinick Matthew","year":"1998","unstructured":"Matthew Botvinick and Jonathan Cohen. 1998. Rubber hands 'feel' touch that eyes see. Nature 391, 6669 (1998), 756."},{"key":"e_1_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2018.2794658"},{"key":"e_1_2_2_29_1","unstructured":"Jerry Brunner. 2019. Repeated measurement analysis of binary responses. http:\/\/www.utstat.toronto.edu\/~brunner\/workshops\/mixed\/"},{"key":"e_1_2_2_30_1","volume-title":"New approach to early sketch processing. Making Pen-Based Interaction Intelligent and Natural","author":"Cates Sonya","year":"2004","unstructured":"Sonya Cates and Randall Davis. 2004. New approach to early sketch processing. 
Making Pen-Based Interaction Intelligent and Natural (2004), 29--34."},{"key":"e_1_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/2380116.2380171"},{"key":"e_1_2_2_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025753"},{"key":"e_1_2_2_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/VR.2019.8798074"},{"key":"e_1_2_2_34_1","first-page":"379","article-title":"Description and reproduction of stylized traditional dance body motion by using labanotation","volume":"15","author":"Choensawat Worawat","year":"2010","unstructured":"Worawat Choensawat, Sachie Takahashi, Minako Nakamura, Woong Choi, and Kozaburo Hachimura. 2010. Description and reproduction of stylized traditional dance body motion by using labanotation. Transactions of the Virtual Reality Society of Japan 15, 3 (2010), 379--388.","journal-title":"Transactions of the Virtual Reality Society of Japan"},{"key":"e_1_2_2_35_1","volume-title":"SnapMove: Movement Projection Mapping in Virtual Reality. In AIVR","author":"Cohn Brian J","year":"2020","unstructured":"Brian J Cohn, Antonella Maselli, Eyal Ofek, and Mar Gonzalez Franco. 2020. SnapMove: Movement Projection Mapping in Virtual Reality. In AIVR 2020. 
IEEE."},{"key":"e_1_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/1399504.1360681"},{"key":"e_1_2_2_37_1","doi-asserted-by":"publisher","DOI":"10.1037\/xap0000192"},{"key":"e_1_2_2_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300599"},{"key":"e_1_2_2_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3242587.3242594"},{"key":"e_1_2_2_40_1","volume-title":"Philosophical conceptions of the self: implications for cognitive science. Trends in cognitive sciences 4, 1","author":"Gallagher Shaun","year":"2000","unstructured":"Shaun Gallagher. 2000. Philosophical conceptions of the self: implications for cognitive science. Trends in cognitive sciences 4, 1 (2000), 14--21."},{"key":"e_1_2_2_41_1","doi-asserted-by":"publisher","DOI":"10.1109\/ROMAN.2010.5598733"},{"key":"e_1_2_2_42_1","doi-asserted-by":"publisher","DOI":"10.1109\/VR46266.2020.00019"},{"key":"e_1_2_2_43_1","volume-title":"Peck","author":"Gonzalez-Franco Mar","year":"2018","unstructured":"Mar Gonzalez-Franco and Tabitha C. Peck. 2018. Avatar Embodiment. Towards a Standardized Questionnaire. Frontiers in Robotics and AI 5 (June 2018), 74. https:\/\/doi.org\/10.3389\/frobt.2018.00074"},{"key":"e_1_2_2_44_1","volume-title":"The contribution of real-time mirror reflections of motor actions on virtual body ownership in an immersive virtual environment. In 2010 IEEE virtual reality conference (VR)","author":"Gonzalez-Franco Mar","unstructured":"Mar Gonzalez-Franco, Daniel Perez-Marcos, Bernhard Spanlang, and Mel Slater. 2010. 
The contribution of real-time mirror reflections of motor actions on virtual body ownership in an immersive virtual environment. In 2010 IEEE virtual reality conference (VR). IEEE, 111--114."},{"key":"e_1_2_2_45_1","doi-asserted-by":"publisher","DOI":"10.3389\/frobt.2017.00033"},{"key":"e_1_2_2_46_1","doi-asserted-by":"publisher","DOI":"10.1145\/2019406.2019424"},{"key":"e_1_2_2_47_1","doi-asserted-by":"publisher","DOI":"10.1109\/3DUI.2017.7893357"},{"key":"e_1_2_2_48_1","doi-asserted-by":"publisher","DOI":"10.1109\/MCG.2017.3271464"},{"key":"e_1_2_2_49_1","volume-title":"Modifying the visual animation of self-avatar can simulate the perception of weight lifting","author":"Gomez Jauregui David Antonio","year":"2014","unstructured":"David Antonio Gomez Jauregui, Ferran Argelaguet, Anne-Helene Olivier, Maud Marchal, Franck Multon, and Anatole Lecuyer. 2014. Toward \"pseudo-haptic avatars\": Modifying the visual animation of self-avatar can simulate the perception of weight lifting. IEEE transactions on visualization and computer graphics 20, 4 (2014), 654--661."},{"key":"e_1_2_2_50_1","doi-asserted-by":"crossref","unstructured":"Marc Jeannerod. 2002. The mechanism of self-recognition in humans. 
15 pages.","DOI":"10.1016\/S0166-4328(02)00384-4"},{"key":"e_1_2_2_51_1","doi-asserted-by":"publisher","DOI":"10.1145\/3013971.3013987"},{"key":"e_1_2_2_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025962"},{"key":"e_1_2_2_53_1","volume-title":"Proceedings of the International Conference on Computer Vision (ICCV","author":"Kendall Grimes G.","year":"2015","unstructured":"A. Kendall, M. Grimes, and R. Cipolla. 2015. PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization. In Proceedings of the International Conference on Computer Vision (ICCV 2015)."},{"key":"e_1_2_2_54_1","doi-asserted-by":"publisher","DOI":"10.1162\/PRES_a_00124"},{"key":"e_1_2_2_55_1","doi-asserted-by":"publisher","DOI":"10.1068\/p7545"},{"key":"e_1_2_2_56_1","first-page":"3","article-title":"The effects of visuomotor calibration to the perceived space and body, through embodiment in immersive virtual reality","volume":"13","author":"Kokkinara Elena","year":"2015","unstructured":"Elena Kokkinara, Mel Slater, and Joan L\u00f3pez-Moliner. 2015. The effects of visuomotor calibration to the perceived space and body, through embodiment in immersive virtual reality. 
ACM Transactions on Applied Perception (TAP) 13, 1 (2015), 3.","journal-title":"ACM Transactions on Applied Perception (TAP)"},{"key":"e_1_2_2_57_1","doi-asserted-by":"publisher","DOI":"10.1145\/3322276.3322352"},{"key":"e_1_2_2_58_1","doi-asserted-by":"publisher","DOI":"10.1145\/1401132.1401202"},{"key":"e_1_2_2_59_1","doi-asserted-by":"publisher","DOI":"10.1145\/1944745.1944768"},{"key":"e_1_2_2_60_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cognition.2007.12.004"},{"key":"e_1_2_2_61_1","unstructured":"Paterson M. H., Ma Y., and Pollick F. E. 2006. A motion capture library for the study of identity, gender and emotion perception from biological motion. 38 (2006), 7291--7299."},{"key":"e_1_2_2_62_1","volume-title":"The building blocks of the full body ownership illusion. Frontiers in human neuroscience 7","author":"Maselli Antonella","year":"2013","unstructured":"Antonella Maselli and Mel Slater. 2013. The building blocks of the full body ownership illusion. Frontiers in human neuroscience 7 (2013), 83."},{"key":"e_1_2_2_63_1","volume-title":"Sliding perspectives: dissociating ownership from self-location during full body illusions in virtual reality. Frontiers in human neuroscience 8","author":"Maselli Antonella","year":"2014","unstructured":"Antonella Maselli and Mel Slater. 2014. Sliding perspectives: dissociating ownership from self-location during full body illusions in virtual reality. 
Frontiers in human neuroscience 8 (2014), 693."},{"key":"e_1_2_2_64_1","unstructured":"Alberto Menache. 2000. Understanding motion capture for computer animation and video games. Morgan Kaufmann."},{"key":"e_1_2_2_65_1","unstructured":"Microsoft Corporation. 2010. Microsoft Kinect Games. Retrieved 2021 from https:\/\/en.wikipedia.org\/wiki\/Category:Kinect_games"},{"key":"e_1_2_2_66_1","doi-asserted-by":"publisher","DOI":"10.1162\/pres.17.1.17"},{"key":"e_1_2_2_67_1","volume-title":"Thi Duyen Ngo, and Thanh Ha Le","author":"Nguyen Xuan Thanh","year":"2019","unstructured":"Xuan Thanh Nguyen, Thi Duyen Ngo, and Thanh Ha Le. 2019. A Spatial-temporal 3D Human Pose Reconstruction Framework. arXiv preprint arXiv:1901.02529 (2019)."},{"key":"e_1_2_2_68_1","unstructured":"TechCrunch Oculus. 2020. TechCrunch - Oculus. Retrieved 2020 from https:\/\/techcrunch.com\/2016\/10\/06\/facebook-social-vr\/"},{"key":"e_1_2_2_69_1","doi-asserted-by":"publisher","DOI":"10.1145\/3281505.3281529"},{"key":"e_1_2_2_70_1","first-page":"44","article-title":"Avatar embodiment. a standardized questionnaire","volume":"1","author":"Peck Tabitha C","year":"2020","unstructured":"Tabitha C Peck and Mar Gonzalez-Franco. 2020. Avatar embodiment. A standardized questionnaire. 
Frontiers in Virtual Reality 1 (2020), 44.","journal-title":"Frontiers in Virtual Reality"},{"key":"e_1_2_2_71_1","doi-asserted-by":"publisher","DOI":"10.1145\/237091.237102"},{"key":"e_1_2_2_72_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300644"},{"key":"e_1_2_2_73_1","volume-title":"SurfaceBrush: from virtual reality drawings to manifold surfaces. arXiv preprint arXiv:1904.12297","author":"Rosales Enrique","year":"2019","unstructured":"Enrique Rosales, Jafet Rodriguez, and Alla Sheffer. 2019. SurfaceBrush: from virtual reality drawings to manifold surfaces. arXiv preprint arXiv:1904.12297 (2019)."},{"key":"e_1_2_2_74_1","doi-asserted-by":"publisher","DOI":"10.1109\/38.708559"},{"key":"e_1_2_2_75_1","volume-title":"Pearson's correlation coefficient. Bmj 345","author":"Sedgwick Philip","year":"2012","unstructured":"Philip Sedgwick. 2012. Pearson's correlation coefficient. BMJ 345 (2012)."},{"key":"e_1_2_2_76_1","volume-title":"SIGGRAPH","author":"Park H. S.","year":"2011","unstructured":"T. Shiratori, H. S. Park, L. Sigal, Y. Sheikh, and J. K. Hodgins. 2011. Motion capture from body-mounted cameras. In SIGGRAPH 2011."},{"key":"e_1_2_2_77_1","doi-asserted-by":"crossref","unstructured":"M. Slater. 2017. Implicit Learning Through Embodiment in Immersive Virtual Reality. In D. Liu, C. Dede, R. Huang, and J. Richards, eds. 
Virtual Augmented and Mixed Realities in Education. 19--34.","DOI":"10.1007\/978-981-10-5490-7_2"},{"key":"e_1_2_2_78_1","doi-asserted-by":"publisher","DOI":"10.1145\/2677199.2680548"},{"key":"e_1_2_2_79_1","doi-asserted-by":"publisher","DOI":"10.3389\/frobt.2014.00009"},{"key":"e_1_2_2_80_1","volume-title":"Virtual Reality Based Immersive Telepresence System for Remote Conversation and Collaboration. In International Workshop on Next Generation Computer Animation Techniques. Springer, 234--247","author":"Tan Zhipeng","year":"2017","unstructured":"Zhipeng Tan, Yuning Hu, and Kun Xu. 2017. Virtual Reality Based Immersive Telepresence System for Remote Conversation and Collaboration. In International Workshop on Next Generation Computer Animation Techniques. Springer, 234--247."},{"key":"e_1_2_2_81_1","doi-asserted-by":"publisher","DOI":"10.1037\/0096-1523.31.1.80"},{"key":"e_1_2_2_82_1","volume-title":"Design and Implementation of Humanoid Robot Behavior Imitation System Based on Skeleton Tracking. In 2019 Chinese Control And Decision Conference (CCDC). IEEE, 3541--3546","author":"Wang Zhifu","year":"2019","unstructured":"Zhifu Wang, Xianfeng Yuan, and Chengjin Zhang. 2019. Design and Implementation of Humanoid Robot Behavior Imitation System Based on Skeleton Tracking. In 2019 Chinese Control And Decision Conference (CCDC). 
IEEE, 3541--3546."},{"key":"e_1_2_2_83_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376687"},{"key":"e_1_2_2_84_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376687"},{"key":"e_1_2_2_85_1","doi-asserted-by":"publisher","DOI":"10.1145\/2766999"},{"key":"e_1_2_2_86_1","doi-asserted-by":"publisher","DOI":"10.1145\/3332165.3347875"},{"key":"e_1_2_2_87_1","first-page":"137","article-title":"Spectral style transfer for human motion between independent actions","volume":"35","author":"Ersin Yumer M","year":"2016","unstructured":"M Ersin Yumer and Niloy J Mitra. 2016. Spectral style transfer for human motion between independent actions. ACM Transactions on Graphics (TOG) 35, 4 (2016), 137.","journal-title":"ACM Transactions on Graphics (TOG)"}],"container-title":["Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3463499","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3463499","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T21:31:28Z","timestamp":1750195888000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3463499"}},"subtitle":["User Motion Accentuation in Virtual 
Reality"],"short-title":[],"issued":{"date-parts":[[2021,6,23]]},"references-count":87,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2021,6,23]]}},"alternative-id":["10.1145\/3463499"],"URL":"https:\/\/doi.org\/10.1145\/3463499","relation":{},"ISSN":["2474-9567"],"issn-type":[{"value":"2474-9567","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,6,23]]},"assertion":[{"value":"2021-06-24","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}