{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,2]],"date-time":"2026-04-02T15:52:27Z","timestamp":1775145147166,"version":"3.50.1"},"reference-count":37,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2021,9,22]],"date-time":"2021-09-22T00:00:00Z","timestamp":1632268800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100003725","name":"National Research Foundation, Korea","doi-asserted-by":"crossref","award":["NRF-2020R1A2C2011541"],"award-info":[{"award-number":["NRF-2020R1A2C2011541"]}],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"crossref"}]},{"name":"KEIT, Korea","award":["20011076"],"award-info":[{"award-number":["20011076"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Comput. Graph. Interact. Tech."],"published-print":{"date-parts":[[2021,9,22]]},"abstract":"<jats:p>This paper presents a novel deep learning-based framework for translating a motion into various styles within multiple domains. Our framework is a single set of generative adversarial networks that learns stylistic features from a collection of unpaired motion clips with style labels to support mapping between multiple style domains. We construct a spatio-temporal graph to model a motion sequence and employ the spatial-temporal graph convolution networks (ST-GCN) to extract stylistic properties along spatial and temporal dimensions. Through spatial-temporal modeling, our framework shows improved style translation results between significantly different actions and on a long motion sequence containing multiple actions. 
In addition, we are the first to develop a mapping network for motion stylization that maps random noise to a style, which allows diverse stylization results to be generated without using reference motions. Through various experiments, we demonstrate the ability of our method to generate improved results in terms of visual quality, stylistic diversity, and content preservation.<\/jats:p>","DOI":"10.1145\/3480145","type":"journal-article","created":{"date-parts":[[2021,9,28]],"date-time":"2021-09-28T04:43:36Z","timestamp":1632804216000},"page":"1-17","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":45,"title":["Diverse Motion Stylization for Multiple Style Domains via Spatial-Temporal Graph-Based Generative Model"],"prefix":"10.1145","volume":"4","author":[{"given":"Soomin","family":"Park","sequence":"first","affiliation":[{"name":"KAIST, Daejeon, Korea"}]},{"given":"Deok-Kyeong","family":"Jang","sequence":"additional","affiliation":[{"name":"KAIST, Daejeon, Korea"}]},{"given":"Sung-Hee","family":"Lee","sequence":"additional","affiliation":[{"name":"KAIST, Daejeon, Korea"}]}],"member":"320","published-online":{"date-parts":[[2021,9,27]]},"reference":[{"key":"e_1_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/3386569.3392469"},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/3099564.3099566"},{"key":"e_1_2_2_3_1","volume-title":"Rethinking the truly unsupervised image-to-image translation. arXiv preprint arXiv:2006.06500","author":"Baek Kyungjune","year":"2020"},{"key":"e_1_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.3390\/computers10030038"},{"key":"e_1_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00821"},{"key":"e_1_2_2_6_1","volume-title":"Large scale adversarial representation learning. 
arXiv preprint arXiv:1907.02544","author":"Donahue Jeff","year":"2019"},{"key":"e_1_2_2_7_1","doi-asserted-by":"crossref","unstructured":"Yuzhu Dong Andreas Aristidou Ariel Shamir Moshe Mahler and Eakta Jain. 2020. Adult2child: Motion Style Transfer using CycleGANs. In Motion Interaction and Games. 1--11.","DOI":"10.1145\/3424636.3426909"},{"key":"e_1_2_2_8_1","doi-asserted-by":"crossref","unstructured":"Han Du Erik Herrmann Janis Sprenger Klaus Fischer and Philipp Slusallek. 2019. Stylistic locomotion modeling and synthesis using variational generative models. In Motion Interaction and Games. 1--10.","DOI":"10.1145\/3359566.3360083"},{"key":"e_1_2_2_9_1","volume-title":"A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576","author":"Gatys Leon A","year":"2015"},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.265"},{"key":"e_1_2_2_11_1","volume-title":"Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30","author":"Heusel Martin","year":"2017"},{"key":"e_1_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/MCG.2017.3271464"},{"key":"e_1_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925975"},{"key":"e_1_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/1186822.1073315"},{"key":"e_1_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.167"},{"key":"e_1_2_2_16_1","doi-asserted-by":"crossref","unstructured":"Xun Huang Ming-Yu Liu Serge Belongie and Jan Kautz. 2018. Multimodal Unsupervised Image-to-Image Translation. 
arXiv:1804.04732 [cs.CV]","DOI":"10.1007\/978-3-030-01219-9_11"},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/1477926.1477927"},{"key":"e_1_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.632"},{"key":"e_1_2_2_19_1","volume-title":"ICLR: International Conference on Learning Representations. 1--15","author":"Kingma Diederik P","year":"2015"},{"key":"e_1_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01246-5_3"},{"key":"e_1_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-019-01284-z"},{"key":"e_1_2_2_22_1","volume-title":"Demystifying neural style transfer. arXiv preprint arXiv:1701.01036","author":"Li Yanghao","year":"2017"},{"key":"e_1_2_2_23_1","volume-title":"Hyung Jin Chang, and Jin Young Choi","author":"Lim Jongin","year":"2019"},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.01065"},{"key":"e_1_2_2_25_1","volume-title":"Computer Graphics Forum","author":"Mason Ian"},{"key":"e_1_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/1730804.1730811"},{"key":"e_1_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-015-0816-y"},{"key":"e_1_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/3340254"},{"key":"e_1_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.308"},{"key":"e_1_2_2_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/1553374.1553505"},{"key":"e_1_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.437"},{"key":"e_1_2_2_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00707"},{"key":"e_1_2_2_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/2766999"},{"key":"e_1_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.12328"},{"key":"e_1_2_2_35_1","volume-title":"Diversity-Sensitive Conditional Generative Adversarial Networks. 
In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=rJliMh09F7","author":"Yang Dingdong","year":"2019"},{"key":"e_1_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925955"},{"key":"e_1_2_2_37_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.244"}],"container-title":["Proceedings of the ACM on Computer Graphics and Interactive Techniques"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3480145","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3480145","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T19:31:16Z","timestamp":1750188676000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3480145"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,9,22]]},"references-count":37,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2021,9,22]]}},"alternative-id":["10.1145\/3480145"],"URL":"https:\/\/doi.org\/10.1145\/3480145","relation":{},"ISSN":["2577-6193"],"issn-type":[{"value":"2577-6193","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,9,22]]},"assertion":[{"value":"2021-09-27","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}