{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,27]],"date-time":"2026-01-27T16:23:21Z","timestamp":1769531001515,"version":"3.49.0"},"reference-count":72,"publisher":"Association for Computing Machinery (ACM)","issue":"6","license":[{"start":{"date-parts":[[2021,12,1]],"date-time":"2021-12-01T00:00:00Z","timestamp":1638316800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2021,12]]},"abstract":"<jats:p>\n            For several decades, researchers have been advancing techniques for creating and rendering 3D digital faces, where a lot of the effort has gone into geometry and appearance capture, modeling and rendering techniques. This body of research work has largely focused on facial skin, with much less attention devoted to peripheral components like hair, eyes and the interior of the mouth. As a result, even with the best technology for facial capture and rendering, in most high-end productions a lot of artist time is still spent modeling the missing components and fine-tuning the rendering parameters to combine everything into photo-real digital renders. In this work we propose to combine incomplete, high-quality renderings showing only facial skin with recent methods for neural rendering of faces, in order to automatically and seamlessly create photo-realistic full-head portrait renders from captured data without the need for artist intervention. Our method begins with traditional face rendering, where the skin is rendered with the desired appearance, expression, viewpoint, and illumination. These skin renders are then projected into the latent space of a pre-trained neural network that can generate arbitrary photo-real face images (StyleGAN2). 
The result is a sequence of realistic face images that match the identity and appearance of the 3D character at the skin level, but are completed naturally with synthesized hair, eyes, inner mouth and surroundings. Notably, we present the first method for\n            <jats:italic>multi-frame consistent<\/jats:italic>\n            projection into this latent space, allowing photo-realistic rendering and preservation of the identity of the digital human over an animated performance sequence, which can depict different expressions, lighting conditions and viewpoints. Our method can be used in new face rendering pipelines and, importantly, in other deep learning applications that require large amounts of realistic training data with ground-truth 3D geometry, appearance maps, lighting, and viewpoint.\n          <\/jats:p>","DOI":"10.1145\/3478513.3480509","type":"journal-article","created":{"date-parts":[[2021,12,10]],"date-time":"2021-12-10T18:29:20Z","timestamp":1639160960000},"page":"1-14","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":18,"title":["Rendering with style"],"prefix":"10.1145","volume":"40","author":[{"given":"Prashanth","family":"Chandran","sequence":"first","affiliation":[{"name":"ETH Zurich, Switzerland"}]},{"given":"Sebastian","family":"Winberg","sequence":"additional","affiliation":[{"name":"Studios, Switzerland"}]},{"given":"Gaspard","family":"Zoss","sequence":"additional","affiliation":[{"name":"Studios, Switzerland and ETH Zurich, Switzerland"}]},{"given":"J\u00e9r\u00e9my","family":"Riviere","sequence":"additional","affiliation":[{"name":"Studios, Switzerland"}]},{"given":"Markus","family":"Gross","sequence":"additional","affiliation":[{"name":"Studios, Switzerland and ETH Zurich, Switzerland"}]},{"given":"Paulo","family":"Gotardo","sequence":"additional","affiliation":[{"name":"Studios, 
Switzerland"}]},{"given":"Derek","family":"Bradley","sequence":"additional","affiliation":[{"name":"Studios, Switzerland"}]}],"member":"320","published-online":{"date-parts":[[2021,12,10]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00453"},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00832"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3447648"},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.00664"},{"key":"e_1_2_1_5_1","volume-title":"Neural Point-Based Graphics. In European Conference on Computer Vision (ECCV). Springer.","author":"Aliev Kara-Ali","year":"2020","unstructured":"Kara-Ali Aliev, Artem Sevastopolsky, Maria Kolos, Dmitry Ulyanov, and Victor Lempitsky. 2020. Neural Point-Based Graphics. In European Conference on Computer Vision (ECCV). Springer."},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/1833349.1778777"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/2185520.2185613"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/2010324.1964970"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925962"},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/2661229.2661285"},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/1833349.1778778"},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00574"},{"key":"e_1_2_1_13_1","volume-title":"Semantic Deep Face Models. In 2020 Intl. Conf. 3D Vision. 345--354","author":"Chandran Prashanth","year":"2020","unstructured":"Prashanth Chandran, Derek Bradley, Markus Gross, and Thabo Beeler. 2020. Semantic Deep Face Models. In 2020 Intl. Conf. 3D Vision. 345--354."},
{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00916"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00821"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/965161.806819"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/344779.344855"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.5555\/2383847.2383869"},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/3450626.3459936"},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.12837"},{"key":"e_1_2_1_21_1","volume-title":"Comprehensive Facial Performance Capture. Computer Graphics Forum 30, 2","author":"Fyffe Graham","year":"2011","unstructured":"Graham Fyffe, Tim Hawkins, Chris Watts, Wan-Chun Ma, and Paul Debevec. 2011. Comprehensive Facial Performance Capture. Computer Graphics Forum 30, 2 (2011)."},{"key":"e_1_2_1_22_1","volume-title":"Proc","author":"Garbin Stephan J","unstructured":"Stephan J Garbin, Marek Kowalski, Matthew Johnson, and Jamie Shotton. 2020. High Resolution Zero-Shot Domain Adaptation of Synthetically Rendered Face Images. In Proc. ECCV. Springer, 220--236."},
{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/2070781.2024163"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/1457515.1409092"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.5555\/2969033.2969125"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/3272127.3275073"},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2015.103"},{"key":"e_1_2_1_28_1","volume-title":"Lin (Eds.)","volume":"33","author":"H\u00e4rk\u00f6nen Erik","year":"2020","unstructured":"Erik H\u00e4rk\u00f6nen, Aaron Hertzmann, Jaakko Lehtinen, and Sylvain Paris. 2020. GANSpace: Discovering Interpretable GAN Controls. In Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (Eds.), Vol. 33. Curran Associates, Inc., 9841--9850."},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.5555\/3128975.3129002"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/2601097.2601194"},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/2766931"},{"key":"e_1_2_1_32_1","volume-title":"Jiqing Wu, and Luc Van Gool.","author":"Huang Zhiwu","year":"2017","unstructured":"Zhiwu Huang, Bernhard Kratzwald, Danda Pani Paudel, Jiqing Wu, and Luc Van Gool. 2017. Face Translation between Images and Videos using Identity-aware CycleGAN. arXiv:1712.00971 [cs.CV]"},
{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/1609967.1609970"},{"key":"e_1_2_1_34_1","volume-title":"Intl. Conf. Learning Representations.","author":"Karras Tero","year":"2018","unstructured":"Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. 2018. Progressive Growing of GANs for Improved Quality, Stability, and Variation. In Intl. Conf. Learning Representations."},{"key":"e_1_2_1_35_1","unstructured":"Tero Karras, Miika Aittala, Samuli Laine, Erik H\u00e4rk\u00f6nen, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. 2021. Alias-Free Generative Adversarial Networks. arXiv:2106.12423 [cs.CV]"},
{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00453"},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00813"},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58621-8_18"},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00084"},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00593"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3323020"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/3450626.3459765"},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1145\/3272127.3275099"},{"key":"e_1_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/3414685.3417814"},{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58452-8_24"},{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.13225"},{"key":"e_1_2_1_47_1","volume-title":"Proc. ICCV.","author":"Nguyen-Phuoc Thu","year":"2019","unstructured":"Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang. 2019. HoloGAN: Unsupervised Learning of 3D Representations From Natural Images. In Proc. ICCV."},{"key":"e_1_2_1_48_1","unstructured":"Martin Pernu\u0161, Vitomir \u0160truc, and Simon Dobri\u0161ek. 2021. High Resolution Face Editing with Masked GAN Latent Code Optimization. arXiv:2103.11135 [cs.CV]"},
{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00232"},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/3386569.3392464"},{"key":"e_1_2_1_51_1","volume-title":"Lin (Eds.)","volume":"33","author":"Schwarz Katja","year":"2020","unstructured":"Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. 2020. GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis. In Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (Eds.), Vol. 33. Curran Associates, Inc., 20154--20166."},{"key":"e_1_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00092"},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2020.3034267"},{"key":"e_1_2_1_54_1","volume-title":"Closed-form factorization of latent semantics in GANs. (July","author":"Shen Yujun","year":"2020","unstructured":"Yujun Shen and Bolei Zhou. 2020. Closed-form factorization of latent semantics in GANs. (July 2020). arXiv:2007.06600 [cs.CV]"},{"key":"e_1_2_1_55_1","volume-title":"Very Deep Convolutional Networks for Large-Scale Image Recognition. In Intl. Conf. on Learning Representations.","author":"Simonyan Karen","year":"2015","unstructured":"Karen Simonyan and Andrew Zisserman. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Intl. Conf. on Learning Representations."},
{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00828"},{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00618"},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1145\/3414685.3417803"},{"key":"e_1_2_1_59_1","volume-title":"State of the Art on Neural Rendering. Computer Graphics Forum (EG STAR 2020)","author":"Tewari A.","year":"2020","unstructured":"A. Tewari, O. Fried, J. Thies, V. Sitzmann, S. Lombardi, K. Sunkavalli, R. Martin-Brualla, T. Simon, J. Saragih, M. Nie\u00dfner, R. Pandey, S. Fanello, G. Wetzstein, J.-Y. Zhu, C. Theobalt, M. Agrawala, E. Shechtman, D. B Goldman, and M. Zollh\u00f6fer. 2020c. State of the Art on Neural Rendering. Computer Graphics Forum (EG STAR 2020) (2020)."},{"key":"e_1_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3323035"},{"key":"e_1_2_1_61_1","volume-title":"Designing an Encoder for StyleGAN Image Manipulation. arXiv preprint arXiv:2102.02766","author":"Tov Omer","year":"2021","unstructured":"Omer Tov, Yuval Alaluf, Yotam Nitzan, Or Patashnik, and Daniel Cohen-Or. 2021. Designing an Encoder for StyleGAN Image Manipulation. arXiv preprint arXiv:2102.02766 (2021)."},
{"key":"e_1_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1145\/3272127.3275098"},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2020.10.081"},{"key":"e_1_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1145\/2980179.2980233"},{"key":"e_1_2_1_65_1","volume-title":"StyleSpace Analysis: Disentangled Controls for StyleGAN Image Generation. (Nov","author":"Wu Zongze","year":"2020","unstructured":"Zongze Wu, Dani Lischinski, and Eli Shechtman. 2020. StyleSpace Analysis: Disentangled Controls for StyleGAN Image Generation. (Nov. 2020). arXiv:2011.12799 [cs.CV]"},{"key":"e_1_2_1_66_1","unstructured":"Weihao Xia, Yulun Zhang, Yujiu Yang, Jing-Hao Xue, Bolei Zhou, and Ming-Hsuan Yang. 2021. GAN Inversion: A Survey. (2021). arXiv:2101.05278 [cs.CV]"},{"key":"e_1_2_1_67_1","volume-title":"BiSeNet V2: Bilateral Network with Guided Aggregation for Real-time Semantic Segmentation. CoRR abs\/2004.02147","author":"Yu Changqian","year":"2020","unstructured":"Changqian Yu, Changxin Gao, Jingbo Wang, Gang Yu, Chunhua Shen, and Nong Sang. 2020. BiSeNet V2: Bilateral Network with Guided Aggregation for Real-time Semantic Segmentation. CoRR abs\/2004.02147 (2020). arXiv:2004.02147"},{"key":"e_1_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01261-8_20"},{"key":"e_1_2_1_69_1","volume-title":"Proc. CVPR.","author":"Zhang R.","unstructured":"R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang. 2018. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In Proc. CVPR."},
{"key":"e_1_2_1_70_1","volume-title":"Image GANs meet Differentiable Rendering for Inverse Graphics and Interpretable 3D Neural Rendering. In Intl. Conf. on Learning Representations.","author":"Zhang Yuxuan","year":"2021","unstructured":"Yuxuan Zhang, Wenzheng Chen, Huan Ling, Jun Gao, Yinan Zhang, Antonio Torralba, and Sanja Fidler. 2021. Image GANs meet Differentiable Rendering for Inverse Graphics and Interpretable 3D Neural Rendering. In Intl. Conf. on Learning Representations."},{"key":"e_1_2_1_71_1","unstructured":"Jiapeng Zhu, Yujun Shen, Deli Zhao, and Bolei Zhou. 2020b. In-Domain GAN Inversion for Real Image Editing. In ECCV."},{"key":"e_1_2_1_72_1","volume-title":"Improved StyleGAN Embedding: Where are the Good Latents? (Dec","author":"Zhu Peihao","year":"2020","unstructured":"Peihao Zhu, Rameen Abdal, Yipeng Qin, and Peter Wonka. 2020a. Improved StyleGAN Embedding: Where are the Good Latents? (Dec. 2020). arXiv:2012.09036 [cs.CV]"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3478513.3480509","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3478513.3480509","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:11:49Z","timestamp":1750191109000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3478513.3480509"}},"subtitle":["combining traditional and neural approaches for high-quality face rendering"],"short-title":[],"issued":{"date-parts":[[2021,12]]},"references-count":72,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2021,12]]}},"alternative-id":["10.1145\/3478513.3480509"],"URL":"https:\/\/doi.org\/10.1145\/3478513.3480509","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"value":"0730-0301","type":"print"},{"value":"1557-7368","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,12]]},"assertion":[{"value":"2021-12-10","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}