{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,21]],"date-time":"2026-03-21T02:10:49Z","timestamp":1774059049745,"version":"3.50.1"},"reference-count":62,"publisher":"Association for Computing Machinery (ACM)","issue":"6","license":[{"start":{"date-parts":[[2024,11,19]],"date-time":"2024-11-19T00:00:00Z","timestamp":1731974400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2024,12,19]]},"abstract":"<jats:p>\n            Real-time rendering of human head avatars is a cornerstone of many computer graphics applications, such as augmented reality, video games, and films, to name a few. Recent approaches address this challenge with computationally efficient geometry primitives in a carefully calibrated multi-view setup. Albeit producing photorealistic head renderings, they often fail to represent complex motion changes, such as the mouth interior and strongly varying head poses. We propose a new method to generate highly dynamic and deformable human head avatars from multi-view imagery in real time. At the core of our method is a hierarchical representation of head models that can capture the complex dynamics of facial expressions and head movements. First, with rich facial features extracted from raw input frames, we learn to deform the coarse facial geometry of the template mesh. We then initialize 3D Gaussians on the deformed surface and refine their positions in a fine step. We train this coarse-to-fine facial avatar model along with the head pose as learnable parameters in an end-to-end framework. 
This enables not only controllable facial animation via video inputs but also high-fidelity novel view synthesis of challenging facial expressions, such as tongue deformations and fine-grained teeth structure under large motion changes. Moreover, it encourages the learned head avatar to generalize towards new facial expressions and head poses at inference time. We demonstrate the performance of our method with comparisons against the related methods on different datasets, spanning challenging facial expression sequences across multiple identities. We also show the potential application of our approach by demonstrating a cross-identity facial performance transfer application. We make the code available on our\n            <jats:bold>project page.<\/jats:bold>\n          <\/jats:p>","DOI":"10.1145\/3687927","type":"journal-article","created":{"date-parts":[[2024,11,19]],"date-time":"2024-11-19T15:46:04Z","timestamp":1732031164000},"page":"1-12","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":9,"title":["GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars from Coarse-to-fine Representations"],"prefix":"10.1145","volume":"43","author":[{"ORCID":"https:\/\/orcid.org\/0009-0007-6985-7159","authenticated-orcid":false,"given":"Kartik","family":"Teotia","sequence":"first","affiliation":[{"name":"Max Planck Institute for Informatics, Saarbr\u00fccken, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0858-0882","authenticated-orcid":false,"given":"Hyeongwoo","family":"Kim","sequence":"additional","affiliation":[{"name":"Imperial College London, London, United Kingdom"}]},{"ORCID":"https:\/\/orcid.org\/0009-0001-8273-6737","authenticated-orcid":false,"given":"Pablo","family":"Garrido","sequence":"additional","affiliation":[{"name":"Flawless AI, Los Angeles, United States of 
America"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3899-7515","authenticated-orcid":false,"given":"Marc","family":"Habermann","sequence":"additional","affiliation":[{"name":"Max Planck Institute for Informatics, Saarbr\u00fccken, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8727-0895","authenticated-orcid":false,"given":"Mohamed","family":"Elgharib","sequence":"additional","affiliation":[{"name":"Max Planck Institute for Informatics, Saarbr\u00fccken, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6104-6625","authenticated-orcid":false,"given":"Christian","family":"Theobalt","sequence":"additional","affiliation":[{"name":"Max Planck Institute for Informatics, Saarbr\u00fccken, Germany"}]}],"member":"320","published-online":{"date-parts":[[2024,11,19]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"IEEE Conf. Comput. Vis. Pattern Recog. IEEE","author":"Athar ShahRukh","year":"2022","unstructured":"ShahRukh Athar, Zexiang Xu, Kalyan Sunkavalli, Eli Shechtman, and Zhixin Shu. 2022. RigNeRF: Fully Controllable Neural 3D Portraits. In IEEE Conf. Comput. Vis. Pattern Recog. IEEE, 20332--20341."},{"key":"e_1_2_1_2_1","doi-asserted-by":"crossref","unstructured":"Aayush Bansal Shugao Ma Deva Ramanan and Yaser Sheikh. 2018. Recycle-GAN: Unsupervised Video Retargeting. In ECCV.","DOI":"10.1007\/978-3-030-01228-1_8"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3528223.3530143"},{"key":"e_1_2_1_4_1","volume-title":"IEEE Conf. Comput. Vis. Pattern Recog. IEEE, 16102--16112","author":"Chan Eric R.","year":"2022","unstructured":"Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J. Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein. 2022. Efficient Geometry-aware 3D Generative Adversarial Networks. In IEEE Conf. Comput. Vis. Pattern Recog. 
IEEE, 16102--16112."},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/3450626.3459936"},{"key":"e_1_2_1_6_1","volume-title":"IEEE Conf. Comput. Vis. Pattern Recog. IEEE, 5491--5500","author":"Fridovich-Keil Sara","year":"2022","unstructured":"Sara Fridovich-Keil, Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. 2022. Plenoxels: Radiance Fields without Neural Networks. In IEEE Conf. Comput. Vis. Pattern Recog. IEEE, 5491--5500."},{"key":"e_1_2_1_7_1","volume-title":"IEEE Conf. Comput. Vis. Pattern Recog. IEEE.","author":"Fu Yang","year":"2024","unstructured":"Yang Fu, Sifei Liu, Amey Kulkarni, Jan Kautz, Alexei A. Efros, and Xiaolong Wang. 2024. COLMAP-Free 3D Gaussian Splatting. In IEEE Conf. Comput. Vis. Pattern Recog. IEEE."},{"key":"e_1_2_1_8_1","volume-title":"IEEE Conf. Comput. Vis. Pattern Recog. IEEE, 8649--8658","author":"Gafni Guy","year":"2021","unstructured":"Guy Gafni, Justus Thies, Michael Zollh\u00f6fer, and Matthias Nie\u00dfner. 2021. Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction. In IEEE Conf. Comput. Vis. Pattern Recog. IEEE, 8649--8658."},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/3550454.3555501"},{"key":"e_1_2_1_10_1","volume-title":"HeadNeRF: A Realtime NeRF-based Parametric Head Model. In IEEE Conf. Comput. Vis. Pattern Recog. IEEE","author":"Hong Yang","year":"2022","unstructured":"Yang Hong, Bo Peng, Haiyao Xiao, Ligang Liu, and Juyong Zhang. 2022. HeadNeRF: A Realtime NeRF-based Parametric Head Model. In IEEE Conf. Comput. Vis. Pattern Recog. IEEE, 20342--20352."},{"key":"e_1_2_1_11_1","volume-title":"UV Gaussians: Joint Learning of Mesh Deformation and Gaussian Textures for Human Avatar Modeling. arXiv preprint arXiv:2403.11589","author":"Jiang Yujiao","year":"2024","unstructured":"Yujiao Jiang, Qingmin Liao, Xiaoyu Li, Li Ma, Qi Zhang, Chaopeng Zhang, Zongqing Lu, and Ying Shan. 2024. 
UV Gaussians: Joint Learning of Mesh Deformation and Gaussian Textures for Human Avatar Modeling. arXiv preprint arXiv:2403.11589 (2024)."},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/3592433"},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201283"},{"key":"e_1_2_1_14_1","volume-title":"Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations (ICLR)","author":"Kingma Diederik","year":"2015","unstructured":"Diederik Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations (ICLR). San Diego, CA, USA."},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/3592455"},{"key":"e_1_2_1_16_1","volume-title":"Chen Change Loy, and Yinda Zhang","author":"Lan Yushi","year":"2023","unstructured":"Yushi Lan, Feitong Tan, Di Qiu, Qiangeng Xu, Kyle Genova, Zeng Huang, Sean Fanello, Rohit Pandey, Thomas A. Funkhouser, Chen Change Loy, and Yinda Zhang. 2023. Gaussian3Diff: 3D Gaussian Diffusion for 3D Full Head Synthesis and Editing. CoRR abs\/2312.03763 (2023)."},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3130800.3130813"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201401"},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3323020"},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3450626.3459863"},{"key":"e_1_2_1_21_1","doi-asserted-by":"crossref","unstructured":"Jonathon Luiten Georgios Kopanas Bastian Leibe and Deva Ramanan. 2024. Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis. In 3DV.","DOI":"10.1109\/3DV62453.2024.00044"},{"key":"e_1_2_1_22_1","volume-title":"Pixel Codec Avatars. In IEEE Conf. Comput. Vis. Pattern Recog. Computer Vision Foundation \/ IEEE Computer Society, 64--73","author":"Ma Shugao","year":"2021","unstructured":"Shugao Ma, Tomas Simon, Jason M. 
Saragih, Dawei Wang, Yuecheng Li, Fernando De la Torre, and Yaser Sheikh. 2021. Pixel Codec Avatars. In IEEE Conf. Comput. Vis. Pattern Recog. Computer Vision Foundation \/ IEEE Computer Society, 64--73."},{"key":"e_1_2_1_23_1","unstructured":"Shengjie Ma Yanlin Weng Tianjia Shao and Kun Zhou. 2024. 3D Gaussian Blendshapes for Head Avatar Animation. In SIGGRAPH. ACM."},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3450626.3459831"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3503250"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/3528223.3530127"},{"key":"e_1_2_1_27_1","volume-title":"ASH: Animatable Gaussian Splats for Efficient and Photoreal Human Rendering.","author":"Pang Haokai","year":"2023","unstructured":"Haokai Pang, Heming Zhu, Adam Kortylewski, Christian Theobalt, and Marc Habermann. 2023. ASH: Animatable Gaussian Splats for Efficient and Photoreal Human Rendering. (2023). arXiv:2312.05941 [cs.CV]"},{"key":"e_1_2_1_28_1","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 1165--1175","author":"Pang Haokai","year":"2024","unstructured":"Haokai Pang, Heming Zhu, Adam Kortylewski, Christian Theobalt, and Marc Habermann. 2024. ASH: Animatable Gaussian Splats for Efficient and Photoreal Human Rendering. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 1165--1175."},{"key":"e_1_2_1_29_1","volume-title":"DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation. In IEEE Conf. Comput. Vis. Pattern Recog. Computer Vision Foundation \/ IEEE Computer Society, 165--174","author":"Park Jeong Joon","year":"2019","unstructured":"Jeong Joon Park, Peter Florence, Julian Straub, Richard A. Newcombe, and Steven Lovegrove. 2019. DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation. In IEEE Conf. Comput. Vis. Pattern Recog. 
Computer Vision Foundation \/ IEEE Computer Society, 165--174."},{"key":"e_1_2_1_30_1","volume-title":"Nerfies: Deformable Neural Radiance Fields. In Int. Conf. Comput. Vis. IEEE, 5845--5854","author":"Park Keunhong","year":"2021","unstructured":"Keunhong Park, Utkarsh Sinha, Jonathan T. Barron, Sofien Bouaziz, Dan B. Goldman, Steven M. Seitz, and Ricardo Martin-Brualla. 2021a. Nerfies: Deformable Neural Radiance Fields. In Int. Conf. Comput. Vis. IEEE, 5845--5854."},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/3478513.3480487"},{"key":"e_1_2_1_32_1","volume-title":"IEEE Conf. Comput. Vis. Pattern Recog. IEEE.","author":"Qian Shenhan","year":"2024","unstructured":"Shenhan Qian, Tobias Kirschstein, Liam Schoneveld, Davide Davoli, Simon Giebenhain, and Matthias Nie\u00dfner. 2024. GaussianAvatars: Photorealistic Head Avatars with Rigged 3D Gaussians. In IEEE Conf. Comput. Vis. Pattern Recog. IEEE."},{"key":"e_1_2_1_33_1","volume-title":"Rig3DGS: Creating Controllable Portraits from Casual Monocular Videos. CoRR abs\/2402.03723","author":"Rivero Alfredo","year":"2024","unstructured":"Alfredo Rivero, ShahRukh Athar, Zhixin Shu, and Dimitris Samaras. 2024. Rig3DGS: Creating Controllable Portraits from Casual Monocular Videos. CoRR abs\/2402.03723 (2024)."},{"key":"e_1_2_1_34_1","volume-title":"Relightable Gaussian Codec Avatars. In IEEE Conf. Comput. Vis. Pattern Recog. IEEE.","author":"Saito Shunsuke","year":"2024","unstructured":"Shunsuke Saito, Gabriel Schwartz, Tomas Simon, Junxuan Li, and Giljoo Nam. 2024a. Relightable Gaussian Codec Avatars. In IEEE Conf. Comput. Vis. Pattern Recog. IEEE."},{"key":"e_1_2_1_35_1","doi-asserted-by":"crossref","unstructured":"Shunsuke Saito Gabriel Schwartz Tomas Simon Junxuan Li and Giljoo Nam. 2024b. Relightable Gaussian Codec Avatars. 
In CVPR.","DOI":"10.1109\/CVPR52733.2024.00021"},{"key":"e_1_2_1_36_1","article-title":"Decaf: Monocular Deformation Capture for Face and Hand Interactions","volume":"42","author":"Shimada Soshi","year":"2023","unstructured":"Soshi Shimada, Vladislav Golyanik, Patrick P\u00e9rez, and Christian Theobalt. 2023. Decaf: Monocular Deformation Capture for Face and Hand Interactions. ACM Transactions on Graphics (TOG) 42, 6, Article 264 (dec 2023).","journal-title":"ACM Transactions on Graphics (TOG)"},{"key":"e_1_2_1_37_1","volume-title":"First Order Motion Model for Image Animation. In Conference on Neural Information Processing Systems (NeurIPS).","author":"Siarohin Aliaksandr","year":"2019","unstructured":"Aliaksandr Siarohin, St\u00e9phane Lathuili\u00e8re, Sergey Tulyakov, Elisa Ricci, and Nicu Sebe. 2019. First Order Motion Model for Image Animation. In Conference on Neural Information Processing Systems (NeurIPS)."},{"key":"e_1_2_1_38_1","unstructured":"Karen Simonyan and Andrew Zisserman. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv:1409.1556 [cs.CV]"},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3649889"},{"key":"e_1_2_1_40_1","volume-title":"IEEE Conf. Comput. Vis. Pattern Recog. Computer Vision Foundation \/ IEEE Computer Society, 2549--2559","author":"Tewari Ayush","year":"2018","unstructured":"Ayush Tewari, Michael Zollh\u00f6fer, Pablo Garrido, Florian Bernard, Hyeongwoo Kim, Patrick P\u00e9rez, and Christian Theobalt. 2018. Self-Supervised Multi-Level Face Model Learning for Monocular Reconstruction at Over 250 Hz. In IEEE Conf. Comput. Vis. Pattern Recog. Computer Vision Foundation \/ IEEE Computer Society, 2549--2559."},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3323035"},{"key":"e_1_2_1_42_1","volume-title":"Proc. Computer Vision and Pattern Recognition (CVPR), IEEE.","author":"Thies J.","unstructured":"J. Thies, M. Zollh\u00f6fer, M. Stamminger, C. 
Theobalt, and M. Nie\u00dfner. 2016. Face2Face: Real-time Face Capture and Reenactment of RGB Videos. In Proc. Computer Vision and Pattern Recognition (CVPR), IEEE."},{"key":"e_1_2_1_43_1","volume-title":"IEEE Conf. Comput. Vis. Pattern Recog. Computer Vision Foundation \/ IEEE Computer Society, 1126--1135","author":"Tran Luan","year":"2019","unstructured":"Luan Tran, Feng Liu, and Xiaoming Liu. 2019. Towards High-Fidelity Nonlinear 3D Face Morphable Model. In IEEE Conf. Comput. Vis. Pattern Recog. Computer Vision Foundation \/ IEEE Computer Society, 1126--1135."},{"key":"e_1_2_1_44_1","doi-asserted-by":"crossref","unstructured":"Alex Trevithick Matthew Chan Michael Stengel Eric R. Chan Chao Liu Zhiding Yu Sameh Khamis Manmohan Chandraker Ravi Ramamoorthi and Koki Nagano. 2023. Real-Time Radiance Fields for Single-Image Portrait View Synthesis. In ACM Transactions on Graphics (SIGGRAPH).","DOI":"10.1145\/3592460"},{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1145\/3610548.3618204"},{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1145\/3588432.3591517"},{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00565"},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2003.819861"},{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2003.819861"},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/3610548.3618136"},{"key":"e_1_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.00536"},{"key":"e_1_2_1_52_1","volume-title":"IEEE Conf. Comput. Vis. Pattern Recog. IEEE.","author":"Xu Yuelang","year":"2024","unstructured":"Yuelang Xu, Benwang Chen, Zhe Li, Hongwen Zhang, Lizhen Wang, Zerong Zheng, and Yebin Liu. 2024. Gaussian Head Avatar: Ultra High-fidelity Head Avatar via Dynamic Gaussians. In IEEE Conf. Comput. Vis. Pattern Recog. 
IEEE."},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/3588432.3591567"},{"key":"e_1_2_1_54_1","first-page":"1","article-title":"LatentAvatar: Learning Latent Expression Code for Expressive Neural Head Avatar. In SIGGRAPH, Erik Brunvand, Alla Sheffer, and Michael Wimmer (Eds.)","volume":"86","author":"Xu Yuelang","year":"2023","unstructured":"Yuelang Xu, Hongwen Zhang, Lizhen Wang, Xiaochen Zhao, Han Huang, Guojun Qi, and Yebin Liu. 2023b. LatentAvatar: Learning Latent Expression Code for Expressive Neural Head Avatar. In SIGGRAPH, Erik Brunvand, Alla Sheffer, and Michael Wimmer (Eds.). ACM, 86:1--86:10.","journal-title":"ACM"},{"key":"e_1_2_1_55_1","first-page":"1","article-title":"Towards Practical Capture of High-Fidelity Relightable Avatars. In SIGGRAPH, June Kim, Ming C. Lin, and Bernd Bickel (Eds.)","volume":"23","author":"Yang Haotian","year":"2023","unstructured":"Haotian Yang, Mingwu Zheng, Wanquan Feng, Haibin Huang, Yu-Kun Lai, Pengfei Wan, Zhongyuan Wang, and Chongyang Ma. 2023. Towards Practical Capture of High-Fidelity Relightable Avatars. In SIGGRAPH, June Kim, Ming C. Lin, and Bernd Bickel (Eds.). ACM, 23:1--23:11.","journal-title":"ACM"},{"key":"e_1_2_1_56_1","volume-title":"IEEE Conf. Comput. Vis. Pattern Recog. Computer Vision Foundation \/ IEEE Computer Society, 586--595","author":"Zhang Richard","year":"2018","unstructured":"Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. 2018. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In IEEE Conf. Comput. Vis. Pattern Recog. Computer Vision Foundation \/ IEEE Computer Society, 586--595."},{"key":"e_1_2_1_57_1","article-title":"HAvatar: High-fidelity Head Avatar via Facial Model Conditioned Neural Radiance Field","volume":"43","author":"Zhao Xiaochen","year":"2024","unstructured":"Xiaochen Zhao, Lizhen Wang, Jingxiang Sun, Hongwen Zhang, Jinli Suo, and Yebin Liu. 2024. 
HAvatar: High-fidelity Head Avatar via Facial Model Conditioned Neural Radiance Field. ACM Trans. Graph. 43, 1 (2024), 6:1--6:16.","journal-title":"ACM Trans. Graph."},{"key":"e_1_2_1_58_1","volume-title":"IEEE Conf. Comput. Vis. Pattern Recog. IEEE, 13535--13545","author":"Zheng Yufeng","year":"2022","unstructured":"Yufeng Zheng, Victoria Fern\u00e1ndez Abrevaya, Marcel C. B\u00fchler, Xu Chen, Michael J. Black, and Otmar Hilliges. 2022. I M Avatar: Implicit Morphable Head Avatars from Videos. In IEEE Conf. Comput. Vis. Pattern Recog. IEEE, 13535--13545."},{"key":"e_1_2_1_59_1","volume-title":"IEEE Conf. Comput. Vis. Pattern Recog. IEEE, 21057--21067","author":"Zheng Yufeng","year":"2023","unstructured":"Yufeng Zheng, Wang Yifan, Gordon Wetzstein, Michael J. Black, and Otmar Hilliges. 2023. PointAvatar: Deformable Point-Based Head Avatars from Videos. In IEEE Conf. Comput. Vis. Pattern Recog. IEEE, 21057--21067."},{"key":"e_1_2_1_60_1","volume-title":"HeadStudio: Text to Animatable Head Avatars with 3D Gaussian Splatting. CoRR abs\/2402.06149","author":"Zhou Zhenglin","year":"2024","unstructured":"Zhenglin Zhou, Fan Ma, Hehe Fan, and Yi Yang. 2024. HeadStudio: Text to Animatable Head Avatars with 3D Gaussian Splatting. CoRR abs\/2402.06149 (2024)."},{"key":"e_1_2_1_61_1","volume-title":"Instant Volumetric Head Avatars. 2023 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Zielonka Wojciech","year":"2022","unstructured":"Wojciech Zielonka, Timo Bolkart, and Justus Thies. 2022. Instant Volumetric Head Avatars. 2023 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022), 4574--4584. 
https:\/\/api.semanticscholar.org\/CorpusID:253761096"},{"key":"e_1_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2002.1021576"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3687927","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3687927","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:09:57Z","timestamp":1750295397000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3687927"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,11,19]]},"references-count":62,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2024,12,19]]}},"alternative-id":["10.1145\/3687927"],"URL":"https:\/\/doi.org\/10.1145\/3687927","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"value":"0730-0301","type":"print"},{"value":"1557-7368","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,11,19]]},"assertion":[{"value":"2024-11-19","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}