{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,3]],"date-time":"2026-04-03T15:41:35Z","timestamp":1775230895167,"version":"3.50.1"},"reference-count":48,"publisher":"Association for Computing Machinery (ACM)","issue":"6","license":[{"start":{"date-parts":[[2019,11,8]],"date-time":"2019-11-08T00:00:00Z","timestamp":1573171200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100004489","name":"Mitacs","doi-asserted-by":"publisher","award":["IT12672"],"award-info":[{"award-number":["IT12672"]}],"id":[{"id":"10.13039\/501100004489","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100002790","name":"Natural Sciences and Engineering Research Council of Canada","doi-asserted-by":"publisher","id":[{"id":"10.13039\/501100002790","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2019,12,31]]},"abstract":"<jats:p>Interactive control of self-balancing, physically simulated humanoids is a long standing problem in the field of real-time character animation. While physical simulation guarantees realistic interactions in the virtual world, simulated characters can appear unnatural if they perform unusual movements in order to maintain balance. Therefore, obtaining a high level of responsiveness to user control, runtime performance, and diversity has often been overlooked in exchange for motion quality. Recent work in the field of deep reinforcement learning has shown that training physically simulated characters to follow motion capture clips can yield high quality tracking results. We propose a two-step approach for building responsive simulated character controllers from unstructured motion capture data. First, meaningful features from the data such as movement direction, heading direction, speed, and locomotion style, are interactively specified and drive a kinematic character controller implemented using motion matching. Second, reinforcement learning is used to train a simulated character controller that is general enough to track the entire distribution of motion that can be generated by the kinematic controller. Our design emphasizes responsiveness to user input, visual quality, and low runtime cost for application in video-games.<\/jats:p>","DOI":"10.1145\/3355089.3356536","type":"journal-article","created":{"date-parts":[[2019,11,8]],"date-time":"2019-11-08T20:27:58Z","timestamp":1573244878000},"page":"1-11","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":187,"title":["DReCon"],"prefix":"10.1145","volume":"38","author":[{"given":"Kevin","family":"Bergamin","sequence":"first","affiliation":[{"name":"McGill University, Canada"}]},{"given":"Simon","family":"Clavet","sequence":"additional","affiliation":[{"name":"Ubisoft La Forge, Canada"}]},{"given":"Daniel","family":"Holden","sequence":"additional","affiliation":[{"name":"Ubisoft La Forge, Canada"}]},{"given":"James Richard","family":"Forbes","sequence":"additional","affiliation":[{"name":"McGill University, Canada"}]}],"member":"320","published-online":{"date-parts":[[2019,11,8]]},"reference":[{"key":"e_1_2_2_1_1","volume-title":"Proc. of GDC","author":"Bollo David","year":"2016","unstructured":"David Bollo. 2016. Inertialization: High-Performance Animation Transitions in 'Gears of War'. In Proc. of GDC 2018."},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/3084363.3085069"},{"key":"e_1_2_2_3_1","volume-title":"CoRR abs\/1606.01540","author":"Brockman Greg","year":"2016","unstructured":"Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. 2016. OpenAI Gym. CoRR abs\/1606.01540 (2016). arXiv:1606.01540 http:\/\/arxiv.org\/abs\/1606.01540"},{"key":"e_1_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/2485895.2485906"},{"key":"e_1_2_2_5_1","volume-title":"Proc. of I3D","author":"Buttner Michael","year":"2019","unstructured":"Michael Buttner. 2019. Machine Learning for Motion Synthesis and Character Control in Games. In Proc. of I3D 2019."},{"key":"e_1_2_2_6_1","volume-title":"Soft Constraints. In Proc. of GDC","author":"Catto Erin","year":"2011","unstructured":"Erin Catto. 2011. Soft Constraints. In Proc. of GDC 2011."},{"key":"e_1_2_2_7_1","doi-asserted-by":"publisher","unstructured":"Nuttapong Chentanez Matthias M\u00fcller Miles Macklin Viktor Makoviychuk and Stefan Jeschke. 2018. Physics-based motion capture imitation with deep reinforcement learning. 1--10. 10.1145\/3274247.3274506","DOI":"10.1145\/3274247.3274506"},{"key":"e_1_2_2_8_1","volume-title":"Proc. of GDC","author":"Clavet Simon","year":"2016","unstructured":"Simon Clavet. 2016. Motion Matching and The Road to Next-Gen Animation. In Proc. of GDC 2016."},{"key":"e_1_2_2_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/1778765.1781156"},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/2776880.2792704"},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.13096"},{"key":"e_1_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.1111\/j.1467-8659.2008.01134.x"},{"key":"e_1_2_2_13_1","unstructured":"Prafulla Dhariwal Christopher Hesse Oleg Klimov Alex Nichol Matthias Plappert Alec Radford John Schulman Szymon Sidor Yuhuai Wu and Peter Zhokhov. 2017. OpenAI Baselines. https:\/\/github.com\/openai\/baselines."},{"key":"e_1_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/2786784.2786802"},{"key":"e_1_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1111\/j.1467-8659.2012.03189.x"},{"key":"e_1_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/2508363.2508399"},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1007\/3-540-32494-1_4"},{"key":"e_1_2_2_18_1","volume-title":"Proc. of GDC","author":"Harrower Geof","year":"2018","unstructured":"Geof Harrower. 2018. Real Player Motion Tech in 'EA Sports UFC 3'. In Proc. of GDC 2018."},{"key":"e_1_2_2_19_1","volume-title":"Emergence of Locomotion Behaviours in Rich Environments. CoRR abs\/1707.02286","author":"Heess Nicolas","year":"2017","unstructured":"Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin A. Riedmiller, and David Silver. 2017. Emergence of Locomotion Behaviours in Rich Environments. CoRR abs\/1707.02286 (2017). arXiv:1707.02286 http:\/\/arxiv.org\/abs\/1707.02286"},{"key":"e_1_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201302"},{"key":"e_1_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3073663"},{"key":"e_1_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/2019627.2019637"},{"key":"e_1_2_2_23_1","unstructured":"Andrew Kirmse. 2004. Game Programming Gems 4. (2004) 95--101."},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/1778765.1781155"},{"key":"e_1_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/2661229.2661233"},{"key":"e_1_2_2_26_1","volume-title":"GPU-Accelerated Robotic Simulation for Distributed Reinforcement Learning. CoRR abs\/1810.05762","author":"Liang Jacky","year":"2018","unstructured":"Jacky Liang, Viktor Makoviychuk, Ankur Handa, Nuttapong Chentanez, Miles Macklin, and Dieter Fox. 2018. GPU-Accelerated Robotic Simulation for Distributed Reinforcement Learning. CoRR abs\/1810.05762 (2018). arXiv:1810.05762 http:\/\/arxiv.org\/abs\/1810.05762"},{"key":"e_1_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.2990496"},{"key":"e_1_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201315"},{"key":"e_1_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/2893476"},{"key":"e_1_2_2_30_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.12571"},{"key":"e_1_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/1778765.1778865"},{"key":"e_1_2_2_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/1531326.1531386"},{"key":"e_1_2_2_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/1576246.1531387"},{"key":"e_1_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.2478\/v10198-012-0034-2"},{"key":"e_1_2_2_35_1","volume-title":"Article 205 (Nov.","author":"Park Soohwan","year":"2019","unstructured":"Soohwan Park, Hoseok Ryu, Sunmin Lee, and Jehee Lee. 2019. Learning predict-and-simulate policies from unorganized human motion data. ACM Trans. Graph. 38, 6, Article 205 (Nov. 2019)."},{"key":"e_1_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201311"},{"key":"e_1_2_2_37_1","doi-asserted-by":"publisher","DOI":"10.1145\/2766910"},{"key":"e_1_2_2_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925881"},{"key":"e_1_2_2_39_1","volume-title":"DeepLoco: Dynamic Locomotion Skills Using Hierarchical Deep Reinforcement Learning. ACM Transactions on Graphics (Proc. SIGGRAPH 2017)","author":"Peng Xue Bin","year":"2017","unstructured":"Xue Bin Peng, Glen Berseth, KangKang Yin, and Michiel van de Panne. 2017. DeepLoco: Dynamic Locomotion Skills Using Hierarchical Deep Reinforcement Learning. ACM Transactions on Graphics (Proc. SIGGRAPH 2017) 36, 4 (2017)."},{"key":"e_1_2_2_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/122718.122755"},{"key":"e_1_2_2_41_1","volume-title":"Proximal Policy Optimization Algorithms. CoRR abs\/1707.06347","author":"Schulman John","year":"2017","unstructured":"John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal Policy Optimization Algorithms. CoRR abs\/1707.06347 (2017). arXiv:1707.06347 http:\/\/arxiv.org\/abs\/1707.06347"},{"key":"e_1_2_2_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/1276377.1276511"},{"key":"e_1_2_2_43_1","volume-title":"Barto","author":"Sutton Richard S.","year":"2018","unstructured":"Richard S. Sutton and Andrew G. Barto. 2018. Reinforcement Learning: An Introduction. MIT Press."},{"key":"e_1_2_2_44_1","doi-asserted-by":"publisher","DOI":"10.1109\/IROS.2012.6386025"},{"key":"e_1_2_2_45_1","unstructured":"Vicon. 2018. Vicon Software. https:\/\/www.vicon.com\/products\/software\/"},{"key":"e_1_2_2_46_1","doi-asserted-by":"publisher","DOI":"10.1145\/1276377.1276509"},{"key":"e_1_2_2_47_1","volume-title":"Proc. of GDC","author":"Zinno Fabio","year":"2019","unstructured":"Fabio Zinno. 2019. ML Tutorial Day: From Motion Matching to Motion Synthesis, and All the Hurdles In Between. In Proc. of GDC 2019."},{"key":"e_1_2_2_48_1","doi-asserted-by":"publisher","DOI":"10.1145\/545261.545276"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3355089.3356536","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3355089.3356536","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T23:44:41Z","timestamp":1750203881000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3355089.3356536"}},"subtitle":["data-driven responsive control of physics-based characters"],"short-title":[],"issued":{"date-parts":[[2019,11,8]]},"references-count":48,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2019,12,31]]}},"alternative-id":["10.1145\/3355089.3356536"],"URL":"https:\/\/doi.org\/10.1145\/3355089.3356536","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"value":"0730-0301","type":"print"},{"value":"1557-7368","type":"electronic"}],"subject":[],"published":{"date-parts":[[2019,11,8]]},"assertion":[{"value":"2019-11-08","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}