{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T04:26:19Z","timestamp":1750220779625,"version":"3.41.0"},"reference-count":31,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2019,6,3]],"date-time":"2019-06-03T00:00:00Z","timestamp":1559520000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"The Ministry of Science and Technology of the ROC","award":["MOST 106-2221-E-009-161-MY2"],"award-info":[{"award-number":["MOST 106-2221-E-009-161-MY2"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Comput. Graph. Interact. Tech."],"published-print":{"date-parts":[[2019,6,3]]},"abstract":"<jats:p>This paper presents an approach to assist the generation of agent-based cooperative animation using reinforcement learning. We focus on manipulation skills for box-shaped objects, including pushing, pulling, and moving objects in a relay way. There are a learning process and an animation process. In the learning process, different kinds of agents are trained using reinforcement learning. Policies are learned to control the agents to perform specific tasks. A physics simulator is adopted to simulate the interaction among objects. In the animation process, users animate agents with the learned policies. We propose several tools to intuitively create cooperative animations. We applied our method to generate several animations in which agents work together to finish tasks. 
A user study indicates that by using our tools, diverse cooperative animations can be easily created.<\/jats:p>","DOI":"10.1145\/3320283","type":"journal-article","created":{"date-parts":[[2019,6,4]],"date-time":"2019-06-04T16:01:38Z","timestamp":1559664098000},"page":"1-18","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":8,"title":["Agent-based cooperative animation for box-manipulation using reinforcement learning"],"prefix":"10.1145","volume":"2","author":[{"given":"Hsiang-Yu","family":"Yang","sequence":"first","affiliation":[{"name":"Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan, ROC"}]},{"given":"Sai-Keung","family":"Wong","sequence":"additional","affiliation":[{"name":"Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan, ROC"}]}],"member":"320","published-online":{"date-parts":[[2019,6,3]]},"reference":[{"volume-title":"Miles Brundage, and Anil Anthony Bharath.","year":"2017","author":"Arulkumaran Kai","key":"e_1_2_1_1_1"},{"doi-asserted-by":"publisher","key":"e_1_2_1_2_1","DOI":"10.1109\/TPAMI.2013.50"},{"doi-asserted-by":"publisher","key":"e_1_2_1_3_1","DOI":"10.1002\/cav.1826"},{"doi-asserted-by":"publisher","key":"e_1_2_1_4_1","DOI":"10.1145\/3272127.3275048"},{"doi-asserted-by":"publisher","key":"e_1_2_1_5_1","DOI":"10.1007\/s10846-010-9429-4"},{"doi-asserted-by":"publisher","key":"e_1_2_1_6_1","DOI":"10.1002\/cav.1779"},{"unstructured":"Nicolas Heess Srinivasan Sriram Jay Lemmon Josh Merel Greg Wayne Yuval Tassa Tom Erez Ziyu Wang Ali Eslami Martin Riedmiller et al. 2017. Emergence of locomotion behaviours in rich environments. arXiv preprint arXiv:1707.02286 (2017).","key":"e_1_2_1_7_1"},{"volume-title":"Unity: A General Platform for Intelligent Agents. arXiv preprint arXiv:1809.02627","year":"2018","author":"Juliani Arthur","key":"e_1_2_1_8_1"},{"doi-asserted-by":"publisher","key":"e_1_2_1_9_1","DOI":"10.1109\/CIG.2016.7860433"},{"key":"e_1_2_1_10_1","article-title":"Evacuation simulation with consideration of obstacle removal and using game theory","author":"Lin Guan-Wen","year":"2018","journal-title":"Phys. Rev. E 97"},{"doi-asserted-by":"publisher","key":"e_1_2_1_11_1","DOI":"10.1145\/3197517.3201334"},{"doi-asserted-by":"publisher","key":"e_1_2_1_12_1","DOI":"10.1007\/s10514-014-9414-z"},{"doi-asserted-by":"crossref","unstructured":"Volodymyr Mnih Koray Kavukcuoglu David Silver Andrei A Rusu Joel Veness Marc G Bellemare Alex Graves Martin Riedmiller Andreas K Fidjeland Georg Ostrovski et al. 2015. Human-level control through deep reinforcement learning. Nature 518 7540 (2015) 529.","key":"e_1_2_1_13_1","DOI":"10.1038\/nature14236"},{"volume-title":"Control of memory, active perception, and action in minecraft. arXiv preprint arXiv:1605.09128","year":"2016","author":"Oh Junhyuk","key":"e_1_2_1_14_1"},{"doi-asserted-by":"publisher","key":"e_1_2_1_15_1","DOI":"10.1007\/s10458-005-2631-2"},{"volume-title":"DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills. 
arXiv preprint arXiv:1804.02717","year":"2018","author":"Peng Xue Bin","key":"e_1_2_1_16_1"},{"doi-asserted-by":"publisher","key":"e_1_2_1_17_1","DOI":"10.1145\/3072959.3073602"},{"doi-asserted-by":"publisher","key":"e_1_2_1_18_1","DOI":"10.1007\/11564096_32"},{"doi-asserted-by":"publisher","key":"e_1_2_1_19_1","DOI":"10.1109\/IROS.2016.7759653"},{"doi-asserted-by":"publisher","key":"e_1_2_1_20_1","DOI":"10.1109\/IROS.2013.6696520"},{"volume-title":"Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347","year":"2017","author":"Schulman John","key":"e_1_2_1_21_1"},{"doi-asserted-by":"publisher","key":"e_1_2_1_22_1","DOI":"10.1177\/105971230501300301"},{"volume-title":"Muhanad Hayder Mohammed Alkilabi, and Otar Akanyeti","year":"2018","author":"Tuci Elio","key":"e_1_2_1_23_1"},{"doi-asserted-by":"publisher","key":"e_1_2_1_24_1","DOI":"10.1145\/3197517.3201362"},{"doi-asserted-by":"publisher","key":"e_1_2_1_25_1","DOI":"10.1145\/3130800.3130833"},{"doi-asserted-by":"publisher","key":"e_1_2_1_26_1","DOI":"10.1145\/3190834.3190839"},{"doi-asserted-by":"publisher","key":"e_1_2_1_27_1","DOI":"10.1007\/s41095-017-0081-9"},{"volume-title":"Biologically inspired ant colony simulation. Computer Animation and Virtual Worlds","year":"2018","author":"Xiang Wei","key":"e_1_2_1_28_1"},{"doi-asserted-by":"publisher","key":"e_1_2_1_29_1","DOI":"10.1145\/2984511.2984585"},{"volume-title":"Towards vision-based deep reinforcement learning for robotic motion control. 
arXiv preprint arXiv:1511.03791","year":"2015","author":"Zhang Fangyi","key":"e_1_2_1_30_1"},{"doi-asserted-by":"publisher","key":"e_1_2_1_31_1","DOI":"10.1145\/3274247.3274502"}],"container-title":["Proceedings of the ACM on Computer Graphics and Interactive Techniques"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3320283","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3320283","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T22:41:28Z","timestamp":1750200088000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3320283"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,6,3]]},"references-count":31,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2019,6,3]]}},"alternative-id":["10.1145\/3320283"],"URL":"https:\/\/doi.org\/10.1145\/3320283","relation":{},"ISSN":["2577-6193"],"issn-type":[{"type":"electronic","value":"2577-6193"}],"subject":[],"published":{"date-parts":[[2019,6,3]]},"assertion":[{"value":"2019-06-03","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}