{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,13]],"date-time":"2025-12-13T23:09:35Z","timestamp":1765667375624,"version":"3.41.0"},"reference-count":30,"publisher":"Association for Computing Machinery (ACM)","issue":"6","license":[{"start":{"date-parts":[[2021,12,1]],"date-time":"2021-12-01T00:00:00Z","timestamp":1638316800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2021,12]]},"abstract":"<jats:p>\n            We present a novel technique that enables 3D artists to synthesize camera motions in virtual environments following a\n            <jats:italic>camera style<\/jats:italic>\n            , while enforcing user-designed camera keyframes as constraints along the sequence. To solve this constrained motion in-betweening problem, we design and train a camera motion generator from a collection of temporal cinematic features (camera and actor motions) using a conditioning on target keyframes. We further condition the generator with a\n            <jats:italic>style code<\/jats:italic>\n            to control how to perform the interpolation between the keyframes. Style codes are generated by training a second network that encodes different camera behaviors in a compact latent space, the\n            <jats:italic>camera style space.<\/jats:italic>\n            Camera behaviors are defined as temporal correlations between actor features and camera motions and can be extracted from real or synthetic film clips. We further extend the system by incorporating a fine control of camera speed and direction via a hidden state mapping technique. 
We evaluate our method on two aspects: i) the capacity to synthesize style-aware camera trajectories with user-defined keyframes; and ii) the capacity to ensure that in-between motions still comply with the reference camera style while satisfying the keyframe constraints. As a result, our system is the first style-aware keyframe in-betweening technique for camera control that balances style-driven automation with precise and interactive control of keyframes.\n          <\/jats:p>","DOI":"10.1145\/3478513.3480533","type":"journal-article","created":{"date-parts":[[2021,12,10]],"date-time":"2021-12-10T18:29:20Z","timestamp":1639160960000},"page":"1-13","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":22,"title":["Camera keyframing with style and control"],"prefix":"10.1145","volume":"40","author":[{"given":"Hongda","family":"Jiang","sequence":"first","affiliation":[{"name":"Peking University, China"}]},{"given":"Marc","family":"Christie","sequence":"additional","affiliation":[{"name":"University Rennes, France"}]},{"given":"Xi","family":"Wang","sequence":"additional","affiliation":[{"name":"University Rennes, France"}]},{"given":"Libin","family":"Liu","sequence":"additional","affiliation":[{"name":"Peking University, China"}]},{"given":"Bin","family":"Wang","sequence":"additional","affiliation":[{"name":"Beijing Institute for General Artificial Intelligence, China"}]},{"given":"Baoquan","family":"Chen","sequence":"additional","affiliation":[{"name":"Peking University, China"}]}],"member":"320","published-online":{"date-parts":[[2021,12,10]]},"reference":[{"key":"e_1_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.5555\/1894345.1894359"},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/38.7751"},{"key":"e_1_2_2_3_1","doi-asserted-by":"crossref","unstructured":"Rogerio Bonatti, Arthur Bucker, Sebastian Scherer, Mustafa Mukadam, and Jessica Hodgins. 2020a. Batteries, camera, action! 
Learning a semantic control space for expressive robot cinematography. arXiv:2011.10118 [cs.CV]","DOI":"10.1109\/ICRA48506.2021.9560745"},{"key":"e_1_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1002\/rob.21931"},{"key":"e_1_2_2_5_1","volume-title":"Proc. of Nucl. ai","author":"B\u00fcttner Michael","year":"2015","unstructured":"Michael B\u00fcttner and Simon Clavet. 2015. Motion Matching-The Road to Next Gen Animation. Proc. of Nucl. ai (2015)."},{"key":"e_1_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3322938"},{"key":"e_1_2_2_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/383259.383287"},{"key":"e_1_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/2822013.2822025"},{"key":"e_1_2_2_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/2522628.2522899"},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.5555\/2887007.2887112"},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2003.1217599"},{"key":"e_1_2_2_12_1","doi-asserted-by":"crossref","unstructured":"Mirko Gschwindt, Efe Camci, Rogerio Bonatti, Wenshan Wang, Erdal Kayacan, and Sebastian Scherer. 2019. Can a Robot Become a Movie Director? Learning Artistic Principles for Aerial Cinematography. 
arXiv:1904.02579 [cs.RO]","DOI":"10.1109\/IROS40897.2019.8967592"},{"key":"e_1_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/3386569.3392480"},{"key":"e_1_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3073663"},{"key":"e_1_2_2_16_1","unstructured":"Chong Huang, Yuanjie Dang, Peng Chen, Xin Yang, et al. 2019a. One-Shot Imitation Filming of Human Motion Videos. arXiv preprint arXiv:1912.10609 (2019)."},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00437"},{"key":"e_1_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICRA.2019.8793915"},{"key":"e_1_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.5555\/3151666.3151678"},{"key":"e_1_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58548-8_41"},{"key":"e_1_2_2_21_1","volume-title":"Adaptive mixtures of local experts. Neural computation 3, 1","author":"Jacobs Robert A","year":"1991","unstructured":"Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. 1991. Adaptive mixtures of local experts. Neural computation 3, 1 (1991), 79--87."},{"key":"e_1_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3386569.3392427"},{"key":"e_1_2_2_23_1","volume-title":"Kingma and Jimmy Ba","author":"Diederick","year":"2015","unstructured":"Diederick P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Int'l Conf. 
Learning Representations."},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.5555\/2421731.2421742"},{"key":"e_1_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/2766965"},{"key":"e_1_2_2_26_1","volume-title":"Graphics Interface Conference, GI'00","author":"Marchand Eric","year":"2000","unstructured":"Eric Marchand and Nicolas Courty. 2000. Image-based virtual camera motion strategies. In Graphics Interface Conference, GI'00. Morgan Kaufmann, 69--76."},{"key":"e_1_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/2366145.2366172"},{"key":"e_1_2_2_28_1","volume-title":"AISB symposium on AI and creativity in entertainment and visual art","volume":"1","author":"Olivier Patrick","year":"1999","unstructured":"Patrick Olivier, Nicolas Halper, Jon Pickering, and Pamela Luna. 1999. Visual composition as optimisation. In AISB symposium on AI and creativity in entertainment and visual art, Vol. 1. 
22--30."},{"key":"e_1_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/1599470.1599478"},{"key":"e_1_2_2_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/54852.378507"},{"key":"e_1_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/3274247.3274502"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3478513.3480533","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3478513.3480533","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:11:40Z","timestamp":1750191100000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3478513.3480533"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,12]]},"references-count":30,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2021,12]]}},"alternative-id":["10.1145\/3478513.3480533"],"URL":"https:\/\/doi.org\/10.1145\/3478513.3480533","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"type":"print","value":"0730-0301"},{"type":"electronic","value":"1557-7368"}],"subject":[],"published":{"date-parts":[[2021,12]]},"assertion":[{"value":"2021-12-10","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}