{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,24]],"date-time":"2026-02-24T18:00:57Z","timestamp":1771956057284,"version":"3.50.1"},"reference-count":52,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2021,6,30]],"date-time":"2021-06-30T00:00:00Z","timestamp":1625011200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2021,6,30]]},"abstract":"<jats:p>We present a text-based tool for editing talking-head video that enables an iterative editing workflow. On each iteration users can edit the wording of the speech, further refine mouth motions if necessary to reduce artifacts, and manipulate non-verbal aspects of the performance by inserting mouth gestures (e.g., a smile) or changing the overall performance style (e.g., energetic, mumble). Our tool requires only 2 to 3 minutes of the target actor video and it synthesizes the video for each iteration in about 40 seconds, allowing users to quickly explore many editing possibilities as they iterate. Our approach is based on two key ideas. (1) We develop a fast phoneme search algorithm that can quickly identify phoneme-level subsequences of the source repository video that best match a desired edit. This enables our fast iteration loop. (2) We leverage a large repository of video of a source actor and develop a new self-supervised neural retargeting technique for transferring the mouth motions of the source actor to the target actor. This allows us to work with relatively short target actor videos, making our approach applicable in many real-world editing scenarios. 
Finally, our refinement and performance controls give users the ability to further fine-tune the synthesized results.<\/jats:p>","DOI":"10.1145\/3449063","type":"journal-article","created":{"date-parts":[[2021,8,1]],"date-time":"2021-08-01T18:35:55Z","timestamp":1627842955000},"page":"1-14","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":24,"title":["Iterative Text-Based Editing of Talking-Heads Using Neural Retargeting"],"prefix":"10.1145","volume":"40","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-5294-9697","authenticated-orcid":false,"given":"Xinwei","family":"Yao","sequence":"first","affiliation":[{"name":"Stanford University, Stanford, CA, USA"}]},{"given":"Ohad","family":"Fried","sequence":"additional","affiliation":[{"name":"The Interdisciplinary Center Herzliya, Herzliya, Israel"}]},{"given":"Kayvon","family":"Fatahalian","sequence":"additional","affiliation":[{"name":"Stanford University, Stanford, CA, USA"}]},{"given":"Maneesh","family":"Agrawala","sequence":"additional","affiliation":[{"name":"Stanford University, Stanford, CA, USA"}]}],"member":"320","published-online":{"date-parts":[[2021,8]]},"reference":[{"key":"e_1_2_2_1_1","unstructured":"Google LLC. 2020a. Google Cloud Speech to Text API. https:\/\/cloud.google.com\/speech-to-text"},{"key":"e_1_2_2_2_1","unstructured":"Google LLC. 2020b. Google Cloud Text to Speech API. https:\/\/cloud.google.com\/text-to-speech"},{"key":"e_1_2_2_3_1","unstructured":"Descript Inc. 2020. Lyrebird AI. https:\/\/www.descript.com\/lyrebird-ai"},{"key":"e_1_2_2_4_1","unstructured":"Rev.com Inc. 2020. Rev. 
https:\/\/rev.com"},{"key":"e_1_2_2_5_1","unstructured":"Mart\u00edn Abadi Ashish Agarwal Paul Barham Eugene Brevdo Zhifeng Chen Craig Citro Greg S. Corrado Andy Davis Jeffrey Dean Matthieu Devin Sanjay Ghemawat Ian Goodfellow Andrew Harp Geoffrey Irving Michael Isard Yangqing Jia Rafal Jozefowicz Lukasz Kaiser Manjunath Kudlur Josh Levenberg Dandelion Man\u00e9 Rajat Monga Sherry Moore Derek Murray Chris Olah Mike Schuster Jonathon Shlens Benoit Steiner Ilya Sutskever Kunal Talwar Paul Tucker Vincent Vanhoucke Vijay Vasudevan Fernanda Vi\u00e9gas Oriol Vinyals Pete Warden Martin Wattenberg Martin Wicke Yuan Yu and Xiaoqiang Zheng. 2015. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. https:\/\/www.tensorflow.org\/ Software available from tensorflow.org."},{"key":"e_1_2_2_6_1","volume-title":"Workshop on Media Forensics at Conference on Computer Vision and Pattern Recognition (CVPR'20)","author":"Agarwal Shruti","year":"2020","unstructured":"
Shruti Agarwal, Hany Farid, Ohad Fried, and Maneesh Agrawala. 2020. Detecting deep-fake videos from phoneme-viseme mismatches. In Workshop on Media Forensics at Conference on Computer Vision and Pattern Recognition (CVPR'20). Seattle, WA."},{"key":"e_1_2_2_7_1","volume-title":"Cohen","author":"Averbuch-Elor Hadar","year":"2017","unstructured":"Hadar Averbuch-Elor, Daniel Cohen-Or, Johannes Kopf, and Michael F. Cohen. 2017. Bringing portraits to life. ACM Transactions on Graphics (Proceeding of SIGGRAPH Asia 2017) 36, 6 (Nov. 2017), 196:1\u201313. DOI:https:\/\/doi.org\/10.1145\/3130800.3130818"},{"key":"e_1_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/2185520.2185563"},{"key":"e_1_2_2_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/311535.311556"},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/258734.258880"},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/1073368.1073388"},{"key":"e_1_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01234-2_32"},{"key":"e_1_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00802"},{"key":"e_1_2_2_14_1","volume-title":"British Machine Vision Conference.","author":"Chung Joon Son","year":"2017","unstructured":"Joon Son Chung, Amir Jamaludin, and Andrew Zisserman. 2017. You said that? 
In British Machine Vision Conference."},{"key":"e_1_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925984"},{"key":"e_1_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/566654.566594"},{"key":"e_1_2_2_17_1","volume-title":"Eurographics Symposium on Rendering - DL-only and Industry Track, Tamy Boubekeur and Pradeep Sen (Eds.). The Eurographics Association. DOI:https:\/\/doi.org\/10","author":"Fried Ohad","year":"2019","unstructured":"Ohad Fried and Maneesh Agrawala. 2019. Puppet dubbing. In Eurographics Symposium on Rendering - DL-only and Industry Track, Tamy Boubekeur and Pradeep Sen (Eds.). The Eurographics Association. DOI:https:\/\/doi.org\/10.2312\/sr.20191220"},{"key":"e_1_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3323028"},{"key":"e_1_2_2_20_1","volume-title":"IEEE Conference on Computer Vision and Pattern Recognition","author":"Garrido P.","year":"2014","unstructured":"P. Garrido, L. Valgaerts, O. Rehmsen, T. Thormaehlen, P. Perez, and C. Theobalt. 2014. Automatic face reenactment. In IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA. 4217--4224. 
DOI:10.1109\/CVPR.2014.537"},{"key":"e_1_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.12552"},{"key":"e_1_2_2_22_1","article-title":"Reconstruction of personalized 3D face rigs from monocular video","volume":"35","author":"Garrido Pablo","year":"2016","unstructured":"Pablo Garrido, Michael Zollh\u00f6fer, Dan Casas, Levi Valgaerts, Kiran Varanasi, Patrick P\u00e9rez, and Christian Theobalt. 2016. Reconstruction of personalized 3D face rigs from monocular video. ACM Trans. Graph. 35, 3, (May 2016), Article 28, 15 pages. DOI:https:\/\/doi.org\/10.1145\/2890493","journal-title":"ACM Trans. Graph."},{"key":"e_1_2_2_23_1","volume-title":"SIGGRAPH Asia 2018 Technical Papers (SIGGRAPH Asia\u201918). ACM","author":"Geng Jiahao","unstructured":"Jiahao Geng, Tianjia Shao, Youyi Zheng, Yanlin Weng, and Kun Zhou. 2018. Warp-guided GANs for single-photo facial animation. In SIGGRAPH Asia 2018 Technical Papers (SIGGRAPH Asia\u201918). ACM, New York, NY, Article 231, 231:1\u2013231:12 pages. http:\/\/doi.acm.org\/10.1145\/3272127.3275043"},{"key":"e_1_2_2_24_1","volume-title":"Yonghui Wu, et\u00a0al.","author":"Jia Ye","year":"2018","unstructured":"Ye Jia, Yu Zhang, Ron Weiss, Quan Wang, Jonathan Shen, Fei Ren, Patrick Nguyen, Ruoming Pang, Ignacio Lopez Moreno, Yonghui Wu, et\u00a0al. 2018. 
Transfer learning from speaker verification to multispeaker text-to-speech synthesis. In Advances in Neural Information Processing Systems. 4480\u20134490."},{"key":"e_1_2_2_25_1","volume-title":"Proceedings of the European Conference on Computer Vision (ECCV'10)","author":"Kemelmacher-Shlizerman Ira","unstructured":"Ira Kemelmacher-Shlizerman, Aditya Sankar, Eli Shechtman, and Steven M. Seitz. 2010. Being John Malkovich. In Proceedings of the European Conference on Computer Vision (ECCV'10). 341\u2013353. DOI:https:\/\/doi.org\/10.1007\/978-3-642-15549-9_25"},{"key":"e_1_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/3355089.3356500"},{"key":"e_1_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201283"},{"key":"e_1_2_2_28_1","volume-title":"3rd International Conference on Learning Representations (ICLR\u201915)","author":"Diederik","unstructured":"Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations (ICLR\u201915), Conference Track Proceedings, Yoshua Bengio and Yann LeCun (Eds.). 
http:\/\/arxiv.org\/abs\/1412.6980"},{"key":"e_1_2_2_29_1","volume-title":"Jose Sotelo, Alexandre de Brebisson, Yoshua Bengio, and Aaron Courville.","author":"Kumar Kundan","year":"2019","unstructured":"Kundan Kumar, Rithesh Kumar, Thibault de Boissiere, Lucas Gestin, Wei Zhen Teoh, Jose Sotelo, Alexandre de Brebisson, Yoshua Bengio, and Aaron Courville. 2019. MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis. arxiv:eess.AS\/1910.06711"},{"key":"e_1_2_2_30_1","unstructured":"Rithesh Kumar Jose Sotelo Kundan Kumar Alexandre de Brebisson and Yoshua Bengio. 2017. ObamaNet: Photo-realistic lip-sync from text. arxiv:cs.CV\/1801.01442"},{"key":"e_1_2_2_31_1","first-page":"707","article-title":"Binary codes capable of correcting deletions, insertions and reversals","volume":"10","author":"Levenshtein V.I.","year":"1966","unstructured":"V.I. Levenshtein. 1966. Binary codes capable of correcting deletions, insertions and reversals. In Soviet Physics Doklady, Vol. 10. 707.","journal-title":"Soviet Physics Doklady"},{"key":"e_1_2_2_32_1","volume-title":"IEEE International Conference on Multimedia and Expo","author":"Liu K.","year":"2011","unstructured":"K. Liu and J. Ostermann. 2011. Realistic facial expression synthesis for an image-based talking head. In IEEE International Conference on Multimedia and Expo, Barcelona, Spain. 1--6. DOI:10.1109\/ICME. 
2011.6011835"},{"key":"e_1_2_2_33_1","doi-asserted-by":"crossref","unstructured":"Wesley Mattheyses Lukas Latacz and Werner Verhelst. 2010. Optimized photorealistic audiovisual speech synthesis using active appearance modeling. In Auditory-Visual Speech Processing. 8\u20131.","DOI":"10.1145\/1924035.1924042"},{"key":"e_1_2_2_34_1","volume-title":"International Conference on Machine Learning. 1310\u20131318","author":"Pascanu Razvan","year":"2013","unstructured":"Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning. 1310\u20131318."},{"key":"e_1_2_2_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/2807442.2807502"},{"issue":"2020","key":"e_1_2_2_36_1","first-page":"698","article-title":"GANimation: One-shot anatomically consistent facial animation","volume":"128","author":"Pumarola A.","year":"2019","unstructured":"A. Pumarola, A. Agudo, A.M. Martinez, A. Sanfeliu, and F. Moreno-Noguer. 2019. GANimation: One-shot anatomically consistent facial animation. Int J. Comput. Vis. 
128 (2020), 698--713. https:\/\/doi.org\/10.1007\/s11263-019-01210-3","journal-title":"Int J. Comput. Vis."},{"key":"e_1_2_2_37_1","doi-asserted-by":"publisher","DOI":"10.1145\/2501988.2501993"},{"key":"e_1_2_2_38_1","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2019\/129"},{"key":"e_1_2_2_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3073640"},{"key":"e_1_2_2_40_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.14022"},{"key":"e_1_2_2_41_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58517-4_42"},{"key":"e_1_2_2_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/2929464.2929475"},{"key":"e_1_2_2_43_1","unstructured":"A\u00e4ron van den Oord Sander Dieleman Heiga Zen Karen Simonyan Oriol Vinyals Alexander Graves Nal Kalchbrenner Andrew Senior and Koray Kavukcuoglu. 2016. WaveNet: A generative model for raw audio. In Arxiv. https:\/\/arxiv.org\/abs\/1609.03499"},{"key":"e_1_2_2_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/1073204.1073209"},{"key":"e_1_2_2_45_1","unstructured":"Konstantinos Vougioukas Stavros Petridis and Maja Pantic. 2018. End-to-End Speech-Driven Facial Animation with Temporal GANs. arxiv:eess.AS\/1805.09313"},{"key":"e_1_2_2_46_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-019-01251-8"},{"key":"e_1_2_2_47_1","doi-asserted-by":"publisher","DOI":"10.1145\/321796.321811"},{"key":"e_1_2_2_48_1","volume-title":"INTERSPEECH 2011 (interspeech 2011 ed.). International Speech Communication Association. 
https:\/\/www.microsoft.com\/en-us\/research\/publication\/text-driven-3d-photo-realistic-talking-head\/","author":"Wang Lijuan","year":"2011","unstructured":"Lijuan Wang, Wei Han, Frank Soong, and Qiang Huo. 2011. Text-driven 3D photo-realistic talking head. In INTERSPEECH 2011 (interspeech 2011 ed.). International Speech Communication Association. https:\/\/www.microsoft.com\/en-us\/research\/publication\/text-driven-3d-photo-realistic-talking-head\/"},{"key":"e_1_2_2_49_1","volume-title":"European Conference on Computer Vision.","author":"Wiles O.","unstructured":"O. Wiles, A.S. Koepke, and A. Zisserman. 2018. X2Face: A network for controlling face generation by using images, audio, and pose codes. In European Conference on Computer Vision."},{"key":"e_1_2_2_50_1","doi-asserted-by":"publisher","DOI":"10.1121\/1.2935783"},{"key":"e_1_2_2_51_1","doi-asserted-by":"crossref","unstructured":"Egor Zakharov Aliaksandra Shysheya Egor Burkov and Victor Lempitsky. 2019. Few-Shot Adversarial Learning of Realistic Neural Talking Head Models. 
arxiv:cs.CV\/1905.08233","DOI":"10.1109\/ICCV.2019.00955"},{"key":"e_1_2_2_52_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33019299"},{"key":"e_1_2_2_53_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2003.819861"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3449063","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3449063","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:17:45Z","timestamp":1750191465000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3449063"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,6,30]]},"references-count":52,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2021,6,30]]}},"alternative-id":["10.1145\/3449063"],"URL":"https:\/\/doi.org\/10.1145\/3449063","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"value":"0730-0301","type":"print"},{"value":"1557-7368","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,6,30]]},"assertion":[{"value":"2020-06-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-02-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-08-01","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}