{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,21]],"date-time":"2026-03-21T02:19:21Z","timestamp":1774059561550,"version":"3.50.1"},"reference-count":34,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2019,7,26]],"date-time":"2019-07-26T00:00:00Z","timestamp":1564099200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Comput. Graph. Interact. Tech."],"published-print":{"date-parts":[[2019,7,26]]},"abstract":"<jats:p>Style is an intrinsic, inescapable part of human motion. It complements the content of motion to convey meaning, mood, and personality. Existing state-of-the-art motion style methods require large quantities of example data and intensive computational resources at runtime. To ensure output quality, such style transfer applications are often run on desktop machines with GPUs and significant memory. In this paper, we present a fast and expressive neural network-based motion style transfer method that generates stylized motion with quality comparable to state-of-the-art methods, but uses much less computational power and a much smaller memory footprint. Our method also allows the output to be adjusted in a latent style space, something not offered in previous approaches. Our style transfer model is implemented using three multi-layered networks: a pose network, a timing network and a foot-contact network. A one-hot style vector serves as an input control knob and determines the stylistic output of these networks. During training, the networks are trained with a large motion capture database containing heterogeneous actions and various styles. Joint information vectors together with one-hot style vectors are extracted from motion data and fed to the networks. Once the network has been trained, the database is no longer needed on the device, thus removing the large memory requirement of previous motion style methods. At runtime, our model takes novel input and allows real-valued numbers to be specified in the style vector, which can be used for interpolation, extrapolation or mixing of styles. With much lower memory and computational requirements, our networks are efficient and fast enough for real-time use on mobile devices. Requiring no information about future states, the style transfer can be performed in an online fashion. We validate our result both quantitatively and perceptually, confirming its effectiveness and improvement over previous approaches.<\/jats:p>","DOI":"10.1145\/3340254","type":"journal-article","created":{"date-parts":[[2019,7,29]],"date-time":"2019-07-29T20:55:51Z","timestamp":1564433751000},"page":"1-17","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":49,"title":["Efficient Neural Networks for Real-time Motion Style Transfer"],"prefix":"10.1145","volume":"2","author":[{"given":"Harrison Jesse","family":"Smith","sequence":"first","affiliation":[{"name":"University of California, Davis, CA, USA"}]},{"given":"Chen","family":"Cao","sequence":"additional","affiliation":[{"name":"Snap Inc. Santa Monica, CA, USA"}]},{"given":"Michael","family":"Neff","sequence":"additional","affiliation":[{"name":"University of California, Davis, Davis, CA, USA"}]},{"given":"Yingying","family":"Wang","sequence":"additional","affiliation":[{"name":"Snap Inc. Santa Monica, CA, USA"}]}],"member":"320","published-online":{"date-parts":[[2019,7,26]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.5555\/241020.241079"},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/3099564.3099566"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/344779.344865"},{"key":"e_1_2_1_4_1","volume-title":"Eurographics 2019 - Short Papers","author":"Du Han"},{"key":"e_1_2_1_5_1","doi-asserted-by":"crossref","unstructured":"Daniel Holden Ikhsanul Habibie Ikuo Kusajima and Taku Komura. 2017a. Fast neural style transfer for motion data. IEEE computer graphics and applications 37 4 (2017) 42--49.","DOI":"10.1109\/MCG.2017.3271464"},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3073663"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925975"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/2820903.2820918"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/2508363.2508367"},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/1073204.1073315"},{"key":"e_1_2_1_11_1","volume-title":"Proceedings of the ACM SIGGRAPH\/Eurographics Symposium on Computer Animation (SCA '12)","author":"Kim Yejin","year":"2012"},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.5555\/846276.846307"},{"key":"e_1_2_1_13_1","unstructured":"Zimo Li Yi Zhou Shuangjiu Xiao Chong He and Hao Li. 2017. Auto-Conditioned LSTM Network for Extended Complex Human Motion Synthesis. CoRR abs\/1707.05363 (2017). arXiv:1707.05363 http:\/\/arxiv.org\/abs\/1707.05363"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3083723"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201315"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.13555"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.bjoms.2007.09.002"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/1394281.1394294"},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1007\/BF02295996"},{"key":"e_1_2_1_20_1","doi-asserted-by":"crossref","unstructured":"Michael Neff Yingying Wang Rob Abbott and Marilyn Walker. 2010. Evaluating the Effect of Gesture and Language on Personality Perception in Conversational Agents. In Intelligent Virtual Agents Jan Allbeck Norman Badler Timothy Bickmore Catherine Pelachaud and Alla Safonova (Eds.). Springer Berlin Heidelberg Berlin Heidelberg 222--235.","DOI":"10.1007\/978-3-642-15892-6_24"},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/2159616.2159631"},{"key":"e_1_2_1_22_1","unstructured":"Dario Pavllo David Grangier and Michael Auli. 2018. QuaterNet: A Quaternion-based Recurrent Model for Human Motion. CoRR abs\/1805.06485 (2018). arXiv:1805.06485 http:\/\/arxiv.org\/abs\/1805.06485"},{"key":"e_1_2_1_23_1","unstructured":"Xue Bin Peng Pieter Abbeel Sergey Levine and Michiel van de Panne. 2018. DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills. CoRR abs\/1804.02717 (2018). arXiv:1804.02717 http:\/\/arxiv.org\/abs\/1804.02717"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3073602"},{"key":"e_1_2_1_25_1","unstructured":"Sebastian Raschka. 2018. Model Evaluation Model Selection and Algorithm Selection in Machine Learning. CoRR abs\/1811.12808 (2018). arXiv:1811.12808 http:\/\/arxiv.org\/abs\/1811.12808"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3073697"},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/1553374.1553505"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/218380.218419"},{"key":"e_1_2_1_29_1","unstructured":"Ruben Villegas Jimei Yang Duygu Ceylan and Honglak Lee. 2018. Neural Kinematic Networks for Unsupervised Motion Retargetting. CoRR abs\/1804.05653. arXiv:1804.05653 http:\/\/arxiv.org\/abs\/1804.05653"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/2822013.2822024"},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/2874357"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/2766999"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925955"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201366"}],"container-title":["Proceedings of the ACM on Computer Graphics and Interactive Techniques"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3340254","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3340254","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T17:49:32Z","timestamp":1750268972000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3340254"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,7,26]]},"references-count":34,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2019,7,26]]}},"alternative-id":["10.1145\/3340254"],"URL":"https:\/\/doi.org\/10.1145\/3340254","relation":{},"ISSN":["2577-6193"],"issn-type":[{"value":"2577-6193","type":"electronic"}],"subject":[],"published":{"date-parts":[[2019,7,26]]},"assertion":[{"value":"2019-07-26","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}