{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,13]],"date-time":"2026-03-13T14:56:49Z","timestamp":1773413809039,"version":"3.50.1"},"reference-count":92,"publisher":"MDPI AG","issue":"2","license":[{"start":{"date-parts":[[2024,1,16]],"date-time":"2024-01-16T00:00:00Z","timestamp":1705363200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001691","name":"JSPS KAKENHI","doi-asserted-by":"publisher","award":["JP22H00545"],"award-info":[{"award-number":["JP22H00545"]}],"id":[{"id":"10.13039\/501100001691","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001691","name":"JSPS KAKENHI","doi-asserted-by":"publisher","award":["JPNP20006"],"award-info":[{"award-number":["JPNP20006"]}],"id":[{"id":"10.13039\/501100001691","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001863","name":"NEDO","doi-asserted-by":"publisher","award":["JP22H00545"],"award-info":[{"award-number":["JP22H00545"]}],"id":[{"id":"10.13039\/501100001863","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001863","name":"NEDO","doi-asserted-by":"publisher","award":["JPNP20006"],"award-info":[{"award-number":["JPNP20006"]}],"id":[{"id":"10.13039\/501100001863","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>This paper presents a model for generating expressive robot motions based on human expressive movements. The proposed data-driven approach combines variational autoencoders and a generative adversarial network framework to extract the essential features of human expressive motion and generate expressive robot motion accordingly. The primary objective was to transfer the underlying expressive features from human to robot motion. 
The input to the model consists of the robot task defined by the robot\u2019s linear velocities and angular velocities and the expressive data defined by the movement of a human body part, represented by the acceleration and angular velocity. The experimental results show that the model can effectively recognize and transfer expressive cues to the robot, producing new movements that incorporate the expressive qualities derived from the human input. Furthermore, the generated motions exhibited variability with different human inputs, highlighting the ability of the model to produce diverse outputs.<\/jats:p>","DOI":"10.3390\/s24020569","type":"journal-article","created":{"date-parts":[[2024,1,16]],"date-time":"2024-01-16T11:37:18Z","timestamp":1705405038000},"page":"569","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":12,"title":["A Generative Model to Embed Human Expressivity into Robot Motions"],"prefix":"10.3390","volume":"24","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-5690-3768","authenticated-orcid":false,"given":"Pablo","family":"Osorio","sequence":"first","affiliation":[{"name":"Department of Mechanical Systems Engineering, Faculty of Engineering, Tokyo University of Agriculture and Technology, Koganei Campus, Tokyo 184-8588, Japan"},{"name":"CNRS-AIST JRL (Joint Robotics Laboratory) IRL, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, Japan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6778-8838","authenticated-orcid":false,"given":"Ryusuke","family":"Sagawa","sequence":"additional","affiliation":[{"name":"Department of Mechanical Systems Engineering, Faculty of Engineering, Tokyo University of Agriculture and Technology, Koganei Campus, Tokyo 184-8588, Japan"},{"name":"CNRS-AIST JRL (Joint Robotics Laboratory) IRL, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, 
Japan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3635-8640","authenticated-orcid":false,"given":"Naoko","family":"Abe","sequence":"additional","affiliation":[{"name":"Naver Labs Europe, 38240 Meylan, France"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7767-4765","authenticated-orcid":false,"given":"Gentiane","family":"Venture","sequence":"additional","affiliation":[{"name":"CNRS-AIST JRL (Joint Robotics Laboratory) IRL, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, Japan"},{"name":"Department of Mechanical Engineering, Graduate School of Engineering, The University of Tokyo, Hongo Campus, Tokyo 113-8654, Japan"}]}],"member":"1968","published-online":{"date-parts":[[2024,1,16]]},"reference":[{"key":"ref_1","unstructured":"Bartra, R. (2019). Chamanes y Robots, Anagrama."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Mancini, C. (May, January 27). Animal-Computer Interaction (ACI): Changing perspective on HCI, participation and sustainability. Proceedings of the 2013 Conference on Human Factors in Computing Systems CHI 2013, Paris, France.","DOI":"10.1145\/2468356.2468744"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"eabm4183","DOI":"10.1126\/scirobotics.abm4183","article-title":"In situ bidirectional human-robot value alignment","volume":"7","author":"Yuan","year":"2022","journal-title":"Sci. Robot."},{"key":"ref_4","first-page":"8","article-title":"Designing personas for expressive robots: Personality in the new breed of moving, speaking, and colorful social home robots","volume":"10","author":"Whittaker","year":"2021","journal-title":"ACM Trans. Hum. Robot Interact. (THRI)"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Ceha, J., Chhibber, N., Goh, J., McDonald, C., Oudeyer, P.Y., Kuli\u0107, D., and Law, E. (2019, January 4\u20139). Expression of Curiosity in Social Robots: Design, Perception, and Effects on Behaviour. 
Proceedings of the 2019 Conference on Human Factors in Computing Systems (CHI\u201919), Glasgow, Scotland.","DOI":"10.1145\/3290605.3300636"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Ostrowski, A.K., Zygouras, V., Park, H.W., and Breazeal, C. (2021, January 9\u201311). Small Group Interactions with Voice-User Interfaces: Exploring Social Embodiment, Rapport, and Engagement. Proceedings of the 2021 ACM\/IEEE International Conference on Human-Robot Interaction (HRI\u201921), Boulder, CO, USA.","DOI":"10.1145\/3434073.3444655"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Erel, H., Cohen, Y., Shafrir, K., Levy, S.D., Vidra, I.D., Shem Tov, T., and Zuckerman, O. (2021, January 9\u201311). Excluded by robots: Can robot-robot-human interaction lead to ostracism?. Proceedings of the 2021 ACM\/IEEE International Conference on Human-Robot Interaction (HRI\u201921), Boulder, CO, USA.","DOI":"10.1145\/3434073.3444648"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Brock, H., \u0160abanovi\u0107, S., and Gomez, R. (2021, January 9\u201311). Remote You, Haru and Me: Exploring Social Interaction in Telepresence Gaming With a Robotic Agent. Proceedings of the 2021 ACM\/IEEE International Conference on Human-Robot Interaction (HRI\u201921), Boulder, CO, USA.","DOI":"10.1145\/3434074.3447177"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"27","DOI":"10.1007\/s43154-020-00005-6","article-title":"Review of interfaces for industrial human-robot interaction","volume":"1","author":"Berg","year":"2020","journal-title":"Curr. Robot. Rep."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"347","DOI":"10.1007\/s12369-014-0267-6","article-title":"Anthropomorphism: Opportunities and challenges in human\u2013robot interaction","volume":"7","author":"Proudfoot","year":"2015","journal-title":"Int. J. Soc. 
Robot."},{"key":"ref_11","first-page":"1877","article-title":"Language models are few-shot learners","volume":"33","author":"Brown","year":"2020","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Zhang, C., Chen, J., Li, J., Peng, Y., and Mao, Z. (2023). Large language models for human-robot interaction: A review. Biomim. Intell. Robot., 3.","DOI":"10.1016\/j.birob.2023.100131"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Capy, S., Osorio, P., Hagane, S., Aznar, C., Garcin, D., Coronado, E., Deuff, D., Ocnarescu, I., Milleville, I., and Venture, G. (2022). Y\u014dkobo: A Robot to Strengthen Links Amongst Users with Non-Verbal Behaviours. Machines, 10.","DOI":"10.3390\/machines10080708"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Szafir, D., Mutlu, B., and Fong, T. (2014, January 3\u20136). Communication of intent in assistive free flyers. Proceedings of the 2014 ACM\/IEEE International Conference on Human-Robot Interaction (HRI\u201914), Bielefeld, Germany.","DOI":"10.1145\/2559636.2559672"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Terzio\u011flu, Y., Mutlu, B., and \u015eahin, E. (2020, January 23\u201326). Designing Social Cues for Collaborative Robots: The Role of Gaze and Breathing in Human-Robot Collaboration. Proceedings of the 2020 ACM\/IEEE International Conference on Human-Robot Interaction (HRI\u201920), Cambridge, UK.","DOI":"10.1145\/3319502.3374829"},{"key":"ref_16","unstructured":"Reed, S., Zolna, K., Parisotto, E., Colmenarejo, S.G., Novikov, A., Barth-Maron, G., Gim\u00e9nez, M., Sulsky, Y., Kay, J., and Springenberg, J.T. (2022). A Generalist Agent. arXiv."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"65","DOI":"10.3366\/drs.2014.0087","article-title":"Is dance a language? Movement, meaning and communication","volume":"32","author":"Bannerman","year":"2014","journal-title":"Danc. 
Res."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"763","DOI":"10.1016\/j.neuropsychologia.2009.10.029","article-title":"Embodied cognition and beyond: Acting and sensing the body","volume":"48","author":"Borghi","year":"2010","journal-title":"Neuropsychologia"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"341","DOI":"10.1109\/T-AFFC.2013.29","article-title":"Body movements for affective expression: A survey of automatic recognition and generation","volume":"4","author":"Karg","year":"2013","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_20","first-page":"20","article-title":"Robot expressive motions: A survey of generation and evaluation methods","volume":"8","author":"Venture","year":"2019","journal-title":"ACM Trans. Hum. Robot Interact. THRI"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Sreedharan, S., Kulkarni, A., Chakraborti, T., Zhuo, H.H., and Kambhampati, S. (June, January 29). Plan explicability and predictability for robot task planning. Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore.","DOI":"10.1109\/ICRA.2017.7989155"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"254","DOI":"10.1109\/THMS.2019.2925717","article-title":"Agent transparency and reliability in human\u2013robot interaction: The influence on user confidence and perceived reliability","volume":"50","author":"Wright","year":"2019","journal-title":"IEEE Trans. Hum. Mach. Syst."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Dragan, A.D., Lee, K.C., and Srinivasa, S.S. (2013, January 3\u20136). Legibility and predictability of robot motion. Proceedings of the 2013 ACM\/IEEE International Conference on Human-Robot Interaction (HRI\u201913), Tokyo, Japan.","DOI":"10.1109\/HRI.2013.6483603"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Sripathy, A., Bobu, A., Li, Z., Sreenath, K., Brown, D.S., and Dragan, A.D. 
(2022, January 23\u201327). Teaching robots to span the space of functional expressive motion. Proceedings of the 2022 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan.","DOI":"10.1109\/IROS47612.2022.9981964"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Knight, H., and Simmons, R. (2014, January 25\u201329). Expressive motion with x, y and theta: Laban effort features for mobile robots. Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, UK.","DOI":"10.1109\/ROMAN.2014.6926264"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Bobu, A., Wiggert, M., Tomlin, C., and Dragan, A.D. (2021, January 9\u201311). Feature Expansive Reward Learning: Rethinking Human Input. Proceedings of the 2021 ACM\/IEEE International Conference on Human-Robot Interaction (HRI\u201921), Boulder, CO, USA.","DOI":"10.1145\/3434073.3444667"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Chidambaram, V., Chiang, Y.H., and Mutlu, B. (2012, January 5\u20138). Designing persuasive robots: How robots might persuade people using vocal and nonverbal cues. Proceedings of the 2012 ACM\/IEEE International Conference on Human-Robot Interaction (HRI\u201912), Boston, MA, USA.","DOI":"10.1145\/2157689.2157798"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"575","DOI":"10.1007\/s12369-019-00523-0","article-title":"How robots influence humans: A survey of nonverbal communication in social human\u2013robot interaction","volume":"11","author":"Saunderson","year":"2019","journal-title":"Int. J. Soc. Robot."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"9687","DOI":"10.1038\/s41598-021-88622-9","article-title":"Promises and trust in human\u2013robot interaction","volume":"11","author":"Cominelli","year":"2021","journal-title":"Sci. 
Rep."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Desai, R., Anderson, F., Matejka, J., Coros, S., McCann, J., Fitzmaurice, G., and Grossman, T. (2019, January 4\u20139). Geppetto: Enabling semantic design of expressive robot behaviors. Proceedings of the 2019 Conference on Human Factors in Computing Systems (CHI\u201919), Glasgow, Scotland.","DOI":"10.1145\/3290605.3300599"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"eabo1241","DOI":"10.1126\/scirobotics.abo1241","article-title":"Human-like behavioral variability blurs the distinction between a human and a machine in a nonverbal Turing test","volume":"7","author":"Ciardo","year":"2022","journal-title":"Sci. Robot."},{"key":"ref_32","first-page":"27","article-title":"Explainable embodied agents through social cues: A review","volume":"10","author":"Tulli","year":"2021","journal-title":"ACM Trans. Hum. Robot Interact. (THRI)"},{"key":"ref_33","unstructured":"Herrera Perez, C., and Barakova, E.I. (2020). Modelling Human Motion: From Human Perception to Robot Design, Springer International Publishing."},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"102432","DOI":"10.1016\/j.rcim.2022.102432","article-title":"Human\u2013robot collaboration and machine learning: A systematic review of recent research","volume":"79","author":"Semeraro","year":"2023","journal-title":"Robot. Comput. Integr. Manuf."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Bruns, M., Ossevoort, S., and Petersen, M.G. (2021, January 8\u201313). Expressivity in interaction: A framework for design. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan.","DOI":"10.1145\/3411764.3445231"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Larboulette, C., and Gibet, S. (2015, January 14\u201315). A Review of Computable Expressive Descriptors of Human Motion. 
Proceedings of the 2nd International Workshop on Movement and Computing (MOCO\u201915), Vancouver, BC, Canada.","DOI":"10.1145\/2790994.2790998"},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"630","DOI":"10.1016\/j.specom.2008.04.009","article-title":"Studies on gesture expressivity for a virtual agent","volume":"51","author":"Pelachaud","year":"2009","journal-title":"Speech Commun."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"879","DOI":"10.1002\/(SICI)1099-0992(1998110)28:6<879::AID-EJSP901>3.0.CO;2-W","article-title":"Bodily expression of emotion","volume":"28","author":"Wallbott","year":"1998","journal-title":"Eur. J. Soc. Psychol."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Davies, E. (2007). Beyond Dance: Laban\u2019s Legacy of Movement Analysis, Routledge.","DOI":"10.4324\/9780203960066"},{"key":"ref_40","unstructured":"Burton, S.J., Samadani, A.A., Gorbet, R., and Kuli\u0107, D. (2016). Dance Notations and Robot Motion, Springer International Publishing."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"1047","DOI":"10.1007\/s12369-020-00695-0","article-title":"Character Synthesis of Ballet Archetypes on Robots Using Laban Movement Analysis: Comparison Between a Humanoid and an Aerial Robot Platform with Lay and Expert Observation","volume":"13","author":"Bacula","year":"2021","journal-title":"Int. J. Soc. Robot."},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"104178","DOI":"10.1016\/j.engappai.2021.104178","article-title":"Emotion space modelling for social robots","volume":"100","author":"Yan","year":"2021","journal-title":"Eng. Appl. Artif. Intell."},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"277","DOI":"10.1007\/s12369-016-0387-2","article-title":"Exploiting the robot kinematic redundancy for emotion conveyance to humans as a lower priority task","volume":"9","author":"Claret","year":"2017","journal-title":"Int. J. Soc. 
Robot."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"H\u00e4ring, M., Bee, N., and Andr\u00e9, E. (August, January 31). Creation and evaluation of emotion expression with body movement, sound and eye color for humanoid robots. Proceedings of the 2011 IEEE RO-MAN: International Symposium on Robot and Human Interactive Communication, Atlanta, GA, USA.","DOI":"10.1109\/ROMAN.2011.6005263"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Embgen, S., Luber, M., Becker-Asano, C., Ragni, M., Evers, V., and Arras, K.O. (2012, January 9\u201312). Robot-specific social cues in emotional body language. Proceedings of the 2012 IEEE RO-MAN: IEEE International Symposium on Robot and Human Interactive Communication, Paris, France.","DOI":"10.1109\/ROMAN.2012.6343883"},{"key":"ref_46","first-page":"2","article-title":"Emotional body language displayed by artificial agents","volume":"2","author":"Beck","year":"2012","journal-title":"ACM Trans. Interact. Intell. Syst. (TiiS)"},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.ijhcs.2015.01.006","article-title":"Emotionally expressive dynamic physical behaviors in robots","volume":"78","author":"Bretan","year":"2015","journal-title":"Int. J. Hum.-Comput. Stud."},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Dairi, A., Harrou, F., Sun, Y., and Khadraoui, S. (2020). Short-term forecasting of photovoltaic solar power production using variational auto-encoder driven deep learning approach. Appl. Sci., 10.","DOI":"10.3390\/app10238400"},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Li, Z., Zhao, Y., Han, J., Su, Y., Jiao, R., Wen, X., and Pei, D. (2021, January 14\u201318). Multivariate time series anomaly detection and interpretation using hierarchical inter-metric and temporal embedding. 
Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Singapore.","DOI":"10.1145\/3447548.3467075"},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Memarzadeh, M., Matthews, B., and Avrekh, I. (2020). Unsupervised anomaly detection in flight data using convolutional variational auto-encoder. Aerospace, 7.","DOI":"10.3390\/aerospace7080115"},{"key":"ref_51","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, \u0141., and Polosukhin, I. (2017, January 4\u20139). Attention is all you need. Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA."},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., and Gao, W. (2021, January 20\u201325). Pre-trained image processing transformer. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Virtual.","DOI":"10.1109\/CVPR46437.2021.01212"},{"key":"ref_53","unstructured":"Lu, J., Yang, J., Batra, D., and Parikh, D. (2016, January 5\u201310). Hierarchical question-image co-attention for visual question answering. Proceedings of the 30th Annual Conference on Neural Information Processing Systems (NIPS), Barcelona, Spain."},{"key":"ref_54","unstructured":"Choi, K., Hawthorne, C., Simon, I., Dinculescu, M., and Engel, J. (2020, January 13\u201318). Encoding musical style with transformer autoencoders. Proceedings of the International Conference on Machine Learning, Virtual."},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"2407","DOI":"10.1109\/LRA.2019.2901898","article-title":"Robot motion planning in learned latent spaces","volume":"4","author":"Ichter","year":"2019","journal-title":"IEEE Robot. Autom. 
Lett."},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"1544","DOI":"10.1109\/LRA.2018.2801475","article-title":"A multimodal anomaly detector for robot-assisted feeding using an lstm-based variational autoencoder","volume":"3","author":"Park","year":"2018","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_57","first-page":"8320","article-title":"Learning signal-agnostic manifolds of neural fields","volume":"34","author":"Du","year":"2021","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_58","doi-asserted-by":"crossref","first-page":"222","DOI":"10.1145\/3414685.3417838","article-title":"Speech gesture generation from the trimodal context of text, audio, and speaker identity","volume":"39","author":"Yoon","year":"2020","journal-title":"ACM Trans. Graph. (TOG)"},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Cudeiro, D., Bolkart, T., Laidlaw, C., Ranjan, A., and Black, M.J. (2019, January 15\u201320). Capture, learning, and synthesis of 3D speaking styles. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.01034"},{"key":"ref_60","doi-asserted-by":"crossref","unstructured":"Ahuja, C., Lee, D.W., and Morency, L.P. (2022, January 19\u201324). Low-resource adaptation for personalized co-speech gesture generation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01991"},{"key":"ref_61","doi-asserted-by":"crossref","unstructured":"Ferstl, Y., Neff, M., and McDonnell, R. (2019, January 28\u201330). Multi-objective adversarial gesture generation. Proceedings of the 12th ACM SIGGRAPH Conference on Motion, Interaction and Games, Newcastle upon Tyne, UK.","DOI":"10.1145\/3359566.3360053"},{"key":"ref_62","doi-asserted-by":"crossref","unstructured":"Yoon, Y., Ko, W.R., Jang, M., Lee, J., Kim, J., and Lee, G. (2019, January 20\u201324). 
Robots learn social skills: End-to-end learning of co-speech gesture generation for humanoid robots. Proceedings of the 2019 International Conference on Robotics and Automation, Montreal, QC, Canada.","DOI":"10.1109\/ICRA.2019.8793720"},{"key":"ref_63","doi-asserted-by":"crossref","unstructured":"Bhattacharya, U., Rewkowski, N., Banerjee, A., Guhan, P., Bera, A., and Manocha, D. (April, January 27). Text2gestures: A transformer-based network for generating emotive body gestures for virtual agents. Proceedings of the 2021 IEEE Virtual Reality and 3D User Interfaces Conference, Virtual.","DOI":"10.1109\/VR50410.2021.00037"},{"key":"ref_64","doi-asserted-by":"crossref","first-page":"497","DOI":"10.1177\/02783649221078031","article-title":"Inducing structure in reward learning by learning features","volume":"41","author":"Bobu","year":"2022","journal-title":"Int. J. Robot. Res."},{"key":"ref_65","doi-asserted-by":"crossref","first-page":"92","DOI":"10.1016\/j.ifacol.2023.01.139","article-title":"Control of a Robot Expressive Movements Using Non-Verbal Features","volume":"55","author":"Osorio","year":"2022","journal-title":"IFAC PapersOnLine"},{"key":"ref_66","doi-asserted-by":"crossref","unstructured":"Penco, L., Cl\u00e9ment, B., Modugno, V., Hoffman, E.M., Nava, G., Pucci, D., Tsagarakis, N.G., Mouret, J.B., and Ivaldi, S. (2018, January 6\u20139). Robust real-time whole-body motion retargeting from human to humanoid. Proceedings of the 2018 IEEE-RAS International Conference on Humanoid Robots (Humanoids), Beijing, China.","DOI":"10.1109\/HUMANOIDS.2018.8624943"},{"key":"ref_67","doi-asserted-by":"crossref","unstructured":"Kim, T., and Lee, J.H. (August, January 31). C-3PO: Cyclic-three-phase optimization for human-robot motion retargeting based on reinforcement learning. 
Proceedings of the 2020 IEEE International Conference on Robotics and Automation, Virtual.","DOI":"10.1109\/ICRA40945.2020.9196948"},{"key":"ref_68","doi-asserted-by":"crossref","unstructured":"Rakita, D., Mutlu, B., and Gleicher, M. (2017, January 6\u20139). A motion retargeting method for effective mimicry-based teleoperation of robot arms. Proceedings of the 2017 ACM\/IEEE International Conference on Human-Robot Interaction (HRI\u201917), Vienna, Austria.","DOI":"10.1145\/2909824.3020254"},{"key":"ref_69","doi-asserted-by":"crossref","unstructured":"Hagane, S., and Venture, G. (2022). Robotic Manipulator\u2019s Expressive Movements Control Using Kinematic Redundancy. Machines, 10.","DOI":"10.3390\/machines10121118"},{"key":"ref_70","doi-asserted-by":"crossref","unstructured":"Knight, H., and Simmons, R. (2016, January 16\u201321). Laban head-motions convey robot state: A call for robot body language. Proceedings of the 2016 IEEE International Conference on Robotics and Automation, Stockholm, Sweden.","DOI":"10.1109\/ICRA.2016.7487451"},{"key":"ref_71","first-page":"21","article-title":"Generating legible and glanceable swarm robot motion through trajectory, collective behavior, and pre-attentive processing features","volume":"10","author":"Kim","year":"2021","journal-title":"ACM Trans. Hum.-Robot Interact. (THRI)"},{"key":"ref_72","doi-asserted-by":"crossref","unstructured":"Cui, H., Maguire, C., and LaViers, A. (2019). Laban-inspired task-constrained variable motion generation on expressive aerial robots. Robotics, 8.","DOI":"10.3390\/robotics8020024"},{"key":"ref_73","first-page":"19667","article-title":"NVAE: A deep hierarchical variational autoencoder","volume":"33","author":"Vahdat","year":"2020","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_74","doi-asserted-by":"crossref","unstructured":"Ribeiro, P.M.S., Matos, A.C., Santos, P.H., and Cardoso, J.S. (2020). Machine learning improvements to human motion tracking with imus. 
Sensors, 20.","DOI":"10.3390\/s20216383"},{"key":"ref_75","unstructured":"Loureiro, A. (2013). Effort: L\u2019alternance Dynamique Dans Le Mouvement, Ressouvenances."},{"key":"ref_76","doi-asserted-by":"crossref","unstructured":"Carreno-Medrano, P., Harada, T., Lin, J.F.S., Kuli\u0107, D., and Venture, G. (2019, January 15\u201317). Analysis of affective human motion during functional task performance: An inverse optimal control approach. Proceedings of the 2019 IEEE-RAS International Conference on Humanoid Robots (Humanoids), Toronto, ON, Canada.","DOI":"10.1109\/Humanoids43949.2019.9035007"},{"key":"ref_77","doi-asserted-by":"crossref","first-page":"22445","DOI":"10.1073\/pnas.1906995116","article-title":"Data-driven discovery of coordinates and governing equations","volume":"116","author":"Champion","year":"2019","journal-title":"Proc. Natl. Acad. Sci. USA"},{"key":"ref_78","unstructured":"Yang, D., Hong, S., Jang, Y., Zhao, T., and Lee, H. (2019, January 6\u20139). Diversity-Sensitive Conditional Generative Adversarial Networks. Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA."},{"key":"ref_79","doi-asserted-by":"crossref","first-page":"621","DOI":"10.1007\/s12369-014-0243-1","article-title":"Recognizing emotions conveyed by human gait","volume":"6","author":"Venture","year":"2014","journal-title":"Int. J. Soc. Robot."},{"key":"ref_80","doi-asserted-by":"crossref","first-page":"1346","DOI":"10.1177\/0278364920908331","article-title":"The blackbird uav dataset","volume":"39","author":"Antonini","year":"2020","journal-title":"Int. J. Robot. Res."},{"key":"ref_81","doi-asserted-by":"crossref","unstructured":"Shi, X., Li, D., Zhao, P., Tian, Q., Tian, Y., Long, Q., Zhu, C., Song, J., Qiao, F., and Song, L. (August, January 31). Are We Ready for Service Robots? The OpenLORIS-Scene Datasets for Lifelong SLAM. 
Proceedings of the 2020 International Conference on Robotics and Automation, Virtual.","DOI":"10.1109\/ICRA40945.2020.9196638"},{"key":"ref_82","unstructured":"Loshchilov, I., and Hutter, F. (2019, January 6\u20139). Decoupled Weight Decay Regularization. Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA."},{"key":"ref_83","unstructured":"Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., and Antiga, L. (2019, January 8\u201314). PyTorch: An Imperative Style, High-Performance Deep Learning Library. Proceedings of the 33rd Conference on Neural Information Processing Systems, Vancouver, BC, Canada."},{"key":"ref_84","doi-asserted-by":"crossref","first-page":"4773","DOI":"10.1007\/s00521-018-3849-7","article-title":"On the evaluation of generative models in music","volume":"32","author":"Yang","year":"2020","journal-title":"Neural Comput. Appl."},{"key":"ref_85","doi-asserted-by":"crossref","unstructured":"Wang, J., and Dong, Y. (2020). Measurement of text similarity: A survey. Information, 11.","DOI":"10.3390\/info11090421"},{"key":"ref_86","first-page":"2579","article-title":"Visualizing data using t-SNE","volume":"9","author":"Hinton","year":"2008","journal-title":"J. Mach. Learn. Res."},{"key":"ref_87","doi-asserted-by":"crossref","unstructured":"Todorov, E., Erez, T., and Tassa, Y. (2012, January 7\u201312). Mujoco: A physics engine for model-based control. 
Proceedings of the 2012 IEEE\/RSJ International Conference On Intelligent Robots and Systems, Algarve, Portugal.","DOI":"10.1109\/IROS.2012.6386109"},{"key":"ref_88","doi-asserted-by":"crossref","first-page":"2149","DOI":"10.1109\/IROS.2004.1389727","article-title":"Design and use paradigms for gazebo, an open-source multi-robot simulator","volume":"Volume 3","author":"Koenig","year":"2004","journal-title":"Proceedings of the 2004 IEEE\/RSJ International Conference On Intelligent Robots and Systems (IROS) (IEEE Cat. No. 04CH37566)"},{"key":"ref_89","doi-asserted-by":"crossref","first-page":"eabm6074","DOI":"10.1126\/scirobotics.abm6074","article-title":"Robot Operating System 2: Design, architecture, and uses in the wild","volume":"7","author":"Macenski","year":"2022","journal-title":"Sci. Robot."},{"key":"ref_90","doi-asserted-by":"crossref","unstructured":"Corke, P., and Haviland, J. (June, January 30). Not your grandmother\u2019s toolbox\u2013the robotics toolbox reinvented for python. Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi\u2019an, China.","DOI":"10.1109\/ICRA48506.2021.9561366"},{"key":"ref_91","doi-asserted-by":"crossref","unstructured":"Emir, E., and Burns, C.M. (September, January 29). Evaluation of Expressive Motions based on the Framework of Laban Effort Features for Social Attributes of Robots. Proceedings of the 2022 IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy.","DOI":"10.1109\/RO-MAN53752.2022.9900645"},{"key":"ref_92","first-page":"27730","article-title":"Training language models to follow instructions with human feedback","volume":"35","author":"Ouyang","year":"2022","journal-title":"Adv. Neural Inf. Process. 
Syst."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/24\/2\/569\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T13:48:12Z","timestamp":1760104092000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/24\/2\/569"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,1,16]]},"references-count":92,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2024,1]]}},"alternative-id":["s24020569"],"URL":"https:\/\/doi.org\/10.3390\/s24020569","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,1,16]]}}}