{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,1]],"date-time":"2026-05-01T17:15:39Z","timestamp":1777655739984,"version":"3.51.4"},"reference-count":31,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2011,7,1]],"date-time":"2011-07-01T00:00:00Z","timestamp":1309478400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2011,7]]},"abstract":"<jats:p>\n            We present a new technique for passive and markerless facial performance capture based on\n            <jats:italic>anchor frames<\/jats:italic>\n            . Our method starts with high resolution per-frame geometry acquisition using state-of-the-art stereo reconstruction, and proceeds to establish a single triangle mesh that is propagated through the entire performance. Leveraging the fact that facial performances often contain repetitive subsequences, we identify\n            <jats:italic>anchor frames<\/jats:italic>\n            as those which contain similar facial expressions to a manually chosen reference expression. Anchor frames are automatically computed over one or even multiple performances. We introduce a robust image-space tracking method that computes pixel matches directly from the reference frame to all anchor frames, and thereby to the remaining frames in the sequence via sequential matching. This allows us to propagate one reconstructed frame to an entire sequence in parallel, in contrast to previous sequential methods. Our anchored reconstruction approach also limits tracker drift and robustly handles occlusions and motion blur. The parallel tracking and mesh propagation offer low computation times. Our technique will even automatically match anchor frames across different sequences captured on different occasions, propagating a single mesh to all performances.\n          <\/jats:p>","DOI":"10.1145\/2010324.1964970","type":"journal-article","created":{"date-parts":[[2011,7,26]],"date-time":"2011-07-26T14:17:46Z","timestamp":1311689866000},"page":"1-10","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":205,"title":["High-quality passive facial performance capture using anchor frames"],"prefix":"10.1145","volume":"30","author":[{"given":"Thabo","family":"Beeler","sequence":"first","affiliation":[{"name":"Disney Research Zurich and ETH Zurich"}]},{"given":"Fabian","family":"Hahn","sequence":"additional","affiliation":[{"name":"Disney Research Zurich"}]},{"given":"Derek","family":"Bradley","sequence":"additional","affiliation":[{"name":"Disney Research Zurich"}]},{"given":"Bernd","family":"Bickel","sequence":"additional","affiliation":[{"name":"Disney Research Zurich"}]},{"given":"Paul","family":"Beardsley","sequence":"additional","affiliation":[{"name":"Disney Research Zurich"}]},{"given":"Craig","family":"Gotsman","sequence":"additional","affiliation":[{"name":"Disney Research Zurich and Technion - Israel Institute of Technology"}]},{"given":"Robert W.","family":"Sumner","sequence":"additional","affiliation":[{"name":"Disney Research Zurich"}]},{"given":"Markus","family":"Gross","sequence":"additional","affiliation":[{"name":"Disney Research Zurich and ETH Zurich"}]}],"member":"320","published-online":{"date-parts":[[2011,7,25]]},"reference":[{"key":"e_1_2_2_1_1","doi-asserted-by":"publisher","unstructured":"Alexander O. Rogers M. Lambeth W. Chiang M. and Debevec P. 2009. The digital Emily project: photoreal facial modeling and animation. In ACM SIGGRAPH Courses 1--15. 10.1145\/1667239.1667251","DOI":"10.1145\/1667239.1667251"},{"key":"e_1_2_2_2_1","volume-title":"Proc. Vision, Modeling, and Visualization, 63--71","author":"Anuar N.","unstructured":"Anuar, N., and Guskov, I. 2004. Extracting animated meshes with adaptive motion estimation. In Proc. Vision, Modeling, and Visualization, 63--71."},{"key":"e_1_2_2_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/1833349.1778777"},{"key":"e_1_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/1275808.1276419"},{"key":"e_1_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1111\/1467-8659.t01-1-00712"},{"key":"e_1_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/1399504.1360698"},{"key":"e_1_2_2_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/1833349.1778778"},{"key":"e_1_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.5555\/794190.794617"},{"key":"e_1_2_2_9_1","doi-asserted-by":"crossref","unstructured":"Ekman P. and Friesen W. 1978. The facial action coding system: A technique for the measurement of facial movement. In Consulting Psychologists.","DOI":"10.1037\/t27734-000"},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.5555\/791215.791488"},{"key":"e_1_2_2_11_1","volume-title":"Proc. CVPR, 1674--1681","author":"Furukawa Y.","unstructured":"Furukawa, Y., and Ponce, J. 2009. Dense 3D motion capture for human faces. In Proc. CVPR, 1674--1681."},{"key":"e_1_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.1111\/j.1467-8659.2011.01888.x"},{"key":"e_1_2_2_13_1","doi-asserted-by":"publisher","unstructured":"Guenter B. Grimm C. Wood D. Malvar H. and Pighin F. 1998. Making faces. In Comp. Graphics 55--66. 10.1145\/280814.280822","DOI":"10.1145\/280814.280822"},{"key":"e_1_2_2_14_1","volume-title":"Proceedings International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT).","author":"Hern\u00e1ndez C.","unstructured":"Hern\u00e1ndez, C., and Vogiatzis, G. 2010. Self-calibrating a real-time monocular 3D facial capture system. In Proceedings International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT)."},{"key":"e_1_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/1015706.1015811"},{"key":"e_1_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1109\/34.216724"},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00371-005-0291-5"},{"key":"e_1_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1023\/B:VISI.0000029664.99615.94"},{"key":"e_1_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.5555\/2383847.2383873"},{"key":"e_1_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/1409060.1409074"},{"key":"e_1_2_2_21_1","volume-title":"Proc. ICCV, 143--150","author":"Pighin F. H.","unstructured":"Pighin, F. H., Szeliski, R., and Salesin, D. 1999. Resynthesizing facial animation through 3D model-based tracking. In Proc. ICCV, 143--150."},{"key":"e_1_2_2_22_1","doi-asserted-by":"crossref","unstructured":"Popa T. South-Dickinson I. Bradley D. Sheffer A. and Heidrich W. 2010. Globally consistent space-time reconstruction. Comp. Graphics Forum (Proc. SGP) 1633--1642.","DOI":"10.1111\/j.1467-8659.2010.01772.x"},{"key":"e_1_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/1399504.1360616"},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/1409060.1409063"},{"key":"e_1_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/1015706.1015736"},{"key":"e_1_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/1516522.1516526"},{"key":"e_1_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1111\/j.1467-8659.2004.00800.x"},{"key":"e_1_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/97879.97906"},{"key":"e_1_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/1731047.1731055"},{"key":"e_1_2_2_30_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00371-008-0259-3"},{"key":"e_1_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/1015706.1015759"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/2010324.1964970","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/2010324.1964970","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T11:06:23Z","timestamp":1750244783000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/2010324.1964970"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2011,7]]},"references-count":31,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2011,7]]}},"alternative-id":["10.1145\/2010324.1964970"],"URL":"https:\/\/doi.org\/10.1145\/2010324.1964970","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"value":"0730-0301","type":"print"},{"value":"1557-7368","type":"electronic"}],"subject":[],"published":{"date-parts":[[2011,7]]},"assertion":[{"value":"2011-07-25","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}