{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,31]],"date-time":"2025-12-31T22:19:22Z","timestamp":1767219562598,"version":"3.41.0"},"reference-count":43,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2019,7,12]],"date-time":"2019-07-12T00:00:00Z","timestamp":1562889600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2019,8,31]]},"abstract":"<jats:p>We present the first method to accurately track the invisible jaw based solely on the visible skin surface, without the need for any markers or augmentation of the actor. As such, the method can readily be integrated with off-the-shelf facial performance capture systems. The core idea is to learn a non-linear mapping from the skin deformation to the underlying jaw motion on a dataset where ground-truth jaw poses have been acquired, and then to retarget the mapping to new subjects. Solving for the jaw pose plays a central role in visual effects pipelines, since accurate jaw motion is required when retargeting to fantasy characters and for physical simulation. 
Currently, this task is performed mostly manually to achieve the desired level of accuracy, and the presented method has the potential to fully automate this labour-intensive and error-prone process.<\/jats:p>","DOI":"10.1145\/3306346.3323044","type":"journal-article","created":{"date-parts":[[2019,7,12]],"date-time":"2019-07-12T19:04:08Z","timestamp":1562958248000},"page":"1-8","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":24,"title":["Accurate markerless jaw tracking for facial performance capture"],"prefix":"10.1145","volume":"38","author":[{"given":"Gaspard","family":"Zoss","sequence":"first","affiliation":[{"name":"DisneyResearch|Studios, ETH Zurich"}]},{"given":"Thabo","family":"Beeler","sequence":"additional","affiliation":[{"name":"DisneyResearch|Studios"}]},{"given":"Markus","family":"Gross","sequence":"additional","affiliation":[{"name":"DisneyResearch|Studios, ETH Zurich"}]},{"given":"Derek","family":"Bradley","sequence":"additional","affiliation":[{"name":"DisneyResearch|Studios"}]}],"member":"320","published-online":{"date-parts":[[2019,7,12]]},"reference":[{"key":"e_1_2_1_1_1","unstructured":"Sameer Agarwal Keir Mierle and Others. 2016. Ceres Solver http:\/\/ceres-solver.org."},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.jdsr.2009.04.001"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/2601097.2601182"},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/2010324.1964970"},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ins.2017.08.029"},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/2461912.2461976"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/1778765.1778778"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0003-9969(00)00015-7"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ajodo.2005.06.037"},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/2766943"},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/2601097.2601204"},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1177\/00220345000790060501"},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.archoralbio.2004.10.002"},{"key":"e_1_2_1_14_1","doi-asserted-by":"crossref","unstructured":"Graham Fyffe Tim Hawkins Chris Watts Wan-Chun Ma and Paul Debevec. 2011. Comprehensive Facial Performance Capture. In Eurographics.","DOI":"10.1111\/j.1467-8659.2011.01888.x"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.13127"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/2508363.2508380"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1007\/BF02291478"},{"key":"e_1_2_1_18_1","doi-asserted-by":"crossref","unstructured":"Pei-Lun Hsieh Chongyang Ma Jihun Yu and Hao Li. 2015. Unconstrained Realtime Facial Performance Capture. In Computer Vision and Pattern Recognition (CVPR).","DOI":"10.1109\/CVPR.2015.7298776"},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.5555\/1577069.1755843"},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.4012\/dmj.24.661"},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3099564.3099581"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/2461912.2462019"},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSEN.2018.2873349"},{"key":"e_1_2_1_24_1","first-page":"156","article-title":"Deep bite: A case report with chewing pattern and electromyographic activity before and after therapy with function generating bite","volume":"14","author":"Piancino M. G.","year":"2013","unstructured":"M. G. Piancino, T. Vallelonga, C. Debernardi, and P. Bracco. 2013. Deep bite: A case report with chewing pattern and electromyographic activity before and after therapy with function generating bite. European Journal of Paediatric Dentistry 14, 2 (2013), 156--159.","journal-title":"European Journal of Paediatric Dentistry"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1590\/S1678-77572008000500004"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0021-9290(97)00012-2"},{"key":"e_1_2_1_27_1","volume-title":"Proceedings of the 2nd International Workshop on Artificial Neural Networks and Intelligent Information Processing 2006","author":"Santos Isa C T","year":"2006","unstructured":"Isa C T Santos , Jo\u00e3o Manuel R S Tavares , Joaquim G Mendes , and Manuel P F Paulo . 2006 . 
A System for Analysis of the 3D Mandibular Movement using Magnetic Sensors and Neuronal Networks . Proceedings of the 2nd International Workshop on Artificial Neural Networks and Intelligent Information Processing 2006 (2006), 54--63."},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/2661229.2661290"},{"key":"e_1_2_1_29_1","doi-asserted-by":"crossref","unstructured":"Supasorn Suwajanakorn Ira Kemelmacher-Shlizerman and Steven M Seitz. 2014. Total Moving Face Reconstruction. In ECCV.","DOI":"10.1007\/978-3-319-10593-2_52"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.jbiomech.2016.01.011"},{"key":"e_1_2_1_31_1","volume-title":"Proc. of IEEE ICCV.","author":"Tewari Ayush","year":"2017","unstructured":"Ayush Tewari, Michael Zoll\u00f6fer, Hyeongwoo Kim, Pablo Garrido, Florian Bernard, Patrick Perez, and Christian Theobalt. 2017. MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction. In Proc. of IEEE ICCV."},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/2816795.2818056"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/2929464.2929475"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201350"},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/2366145.2366206"},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/2010324.1964972"},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00221-014-4037-3"},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1044\/1092-4388(2011\/10-0236)"},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.archoralbio.2004.07.010"},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925882"},{"key":"e_1_2_1_41_1","volume-title":"Building Anatomically Realistic Jaw Kinematics Model from Data. CoRR abs\/1805.0","author":"Yang Wenwu","year":"2018","unstructured":"Wenwu Yang, Nathan Marshak, Daniel S\u00fdkora, Srikumar Ramalingam, and Ladislav Kavan. 2018. Building Anatomically Realistic Jaw Kinematics Model from Data. CoRR abs\/1805.0 (2018). arXiv:1805.05903 http:\/\/arxiv.org\/abs\/1805.05903"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1046\/j.1365-2842.2000.00505.x"},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201382"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3306346.3323044","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3306346.3323044","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T00:25:52Z","timestamp":1750206352000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3306346.3323044"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,7,12]]},"references-count":43,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2019,8,31]]}},"alternative-id":["10.1145\/3306346.3323044"],"URL":"https:\/\/doi.org\/10.1145\/3306346.3323044","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"type":"print","value":"0730-0301"},{"type":"electronic","value":"1557-7368"}],"subject":[],"published":{"date-parts":[[2019,7,12]]},"assertion":[{"value":"2019-07-12","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}