{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,6]],"date-time":"2026-05-06T16:22:24Z","timestamp":1778084544209,"version":"3.51.4"},"reference-count":95,"publisher":"Association for Computing Machinery (ACM)","issue":"6","license":[{"start":{"date-parts":[[2017,11,20]],"date-time":"2017-11-20T00:00:00Z","timestamp":1511136000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2017,12,31]]},"abstract":"<jats:p>We present a fully automatic framework that digitizes a complete 3D head with hair from a single unconstrained image. Our system offers a practical and consumer-friendly end-to-end solution for avatar personalization in gaming and social VR applications. The reconstructed models include secondary components (eyes, teeth, tongue, and gums) and provide animation-friendly blendshapes and joint-based rigs. While the generated face is a high-quality textured mesh, we propose a versatile and efficient polygonal strips (polystrips) representation for the hair. Polystrips are suitable for an extremely wide range of hairstyles and textures and are compatible with existing game engines for real-time rendering. In addition to integrating state-of-the-art advances in facial shape modeling and appearance inference, we propose a novel single-view hair generation pipeline, based on 3D-model and texture retrieval, shape refinement, and polystrip patching optimization. The performance of our hairstyle retrieval is enhanced using a deep convolutional neural network for semantic hair attribute classification. Our generated models are visually comparable to state-of-the-art game characters designed by professional artists. 
For real-time settings, we demonstrate the flexibility of polystrips in handling hairstyle variations, as opposed to conventional strand-based representations. We further show the effectiveness of our approach on a large number of images taken in the wild, and how compelling avatars can be easily created by anyone.<\/jats:p>","DOI":"10.1145\/3130800.31310887","type":"journal-article","created":{"date-parts":[[2017,11,22]],"date-time":"2017-11-22T16:25:08Z","timestamp":1511367908000},"page":"1-14","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":116,"title":["Avatar digitization from a single image for real-time rendering"],"prefix":"10.1145","volume":"36","author":[{"given":"Liwen","family":"Hu","sequence":"first","affiliation":[{"name":"University of Southern California"}]},{"given":"Shunsuke","family":"Saito","sequence":"additional","affiliation":[{"name":"University of Southern California"}]},{"given":"Lingyu","family":"Wei","sequence":"additional","affiliation":[{"name":"University of Southern California"}]},{"given":"Koki","family":"Nagano","sequence":"additional","affiliation":[{"name":"Pinscreen"}]},{"given":"Jaewoo","family":"Seo","sequence":"additional","affiliation":[{"name":"Pinscreen"}]},{"given":"Jens","family":"Fursund","sequence":"additional","affiliation":[{"name":"Pinscreen"}]},{"given":"Iman","family":"Sadeghi","sequence":"additional","affiliation":[{"name":"Pinscreen"}]},{"given":"Carrie","family":"Sun","sequence":"additional","affiliation":[{"name":"Pinscreen"}]},{"given":"Yen-Chun","family":"Chen","sequence":"additional","affiliation":[{"name":"Pinscreen"}]},{"given":"Hao","family":"Li","sequence":"additional","affiliation":[{"name":"University of Southern California"}]}],"member":"320","published-online":{"date-parts":[[2017,11,20]]},"reference":[{"key":"e_1_2_2_1_1","unstructured":"Louis Bavoil and Kevin Myers. 2008. Order independent transparency with dual depth peeling. (2008).  
"},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/1778765.1778777"},{"key":"e_1_2_2_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/2185520.2185613"},{"key":"e_1_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/2010324.1964970"},{"key":"e_1_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925962"},{"key":"e_1_2_2_6_1","volume-title":"2007 11th IEEE International Conference on Computer Vision 00","author":"Blake Andrew","year":"2007","unstructured":"Andrew Blake, Sami Romdhani, Thomas Vetter, Brian Amberg, and Andrew Fitzgibbon. 2007. Reconstructing High Quality Face-Surfaces using Model Based Stereo. 2007 11th IEEE International Conference on Computer Vision 00, undefined (2007)."},{"key":"e_1_2_2_7_1","volume-title":"Computer graphics forum","author":"Blanz Volker","unstructured":"Volker Blanz, Curzio Basso, Tomaso Poggio, and Thomas Vetter. 2003. Reanimating faces in images and video. In Computer graphics forum, Vol. 22. Wiley Online Library."},{"key":"e_1_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/311535.311556"},{"key":"e_1_2_2_9_1","volume-title":"A 3D Morphable Model Learnt From 10,000 Faces","author":"Booth James","unstructured":"James Booth, Anastasios Roussos, Stefanos Zafeiriou, Allan Ponniah, and David Dunaway. 2016. A 3D Morphable Model Learnt From 10,000 Faces. In IEEE CVPR."},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/2461912.2461976"},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/1778765.1778778"},{"key":"e_1_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/2601097.2601204"},{"key":"e_1_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2013.249"},{"key":"e_1_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925873"},{"key":"e_1_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-013-0667-3"},{"key":"e_1_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/2816795.2818112"},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925961"},{"key":"e_1_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/2461912.2461990"},{"key":"e_1_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/2185520.2185612"},{"key":"e_1_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/2601097.2601211"},{"key":"e_1_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2005.20"},{"key":"e_1_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/34.927467"},{"key":"e_1_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2008.01.024"},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/344779.344855"},{"key":"e_1_2_2_25_1","doi-asserted-by":"crossref","unstructured":"J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. 2009. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09.","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"e_1_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/2601097.2601133"},{"key":"e_1_2_2_27_1","unstructured":"FaceUnity. 2017. (2017). http:\/\/www.faceunity.com\/p2a-demo.mp4."},{"key":"e_1_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/2508363.2508380"},{"key":"e_1_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/2890493"},{"key":"e_1_2_2_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/2980179.2982419"},{"key":"e_1_2_2_31_1","volume-title":"Image style transfer using convolutional neural networks","author":"Gatys Leon A","unstructured":"Leon A Gatys, Alexander S Ecker, and Matthias Bethge. 2016. Image style transfer using convolutional neural networks. In IEEE CVPR."},{"key":"e_1_2_2_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/2070781.2024163"},{"key":"e_1_2_2_33_1","volume-title":"Deep residual learning for image recognition","author":"He Kaiming","unstructured":"Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In IEEE CVPR."},{"key":"e_1_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/2366145.2366165"},{"key":"e_1_2_2_35_1","volume-title":"Unconstrained Realtime Facial Performance Capture","author":"Hsieh Pei-Lun","unstructured":"Pei-Lun Hsieh, Chongyang Ma, Jihun Yu, and Hao Li. 2015. Unconstrained Realtime Facial Performance Capture. In IEEE CVPR."},{"key":"e_1_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/2601097.2601194"},{"key":"e_1_2_2_37_1","doi-asserted-by":"publisher","DOI":"10.1145\/2766931"},{"key":"e_1_2_2_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/2661229.2661254"},{"key":"e_1_2_2_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/2766974"},{"key":"e_1_2_2_41_1","volume-title":"Avatar SDK","year":"2017","unstructured":"itSeez3D: Avatar SDK. 2017. (2017). https:\/\/avatarsdk.com."},{"key":"e_1_2_2_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/2659467.2675048"},{"key":"e_1_2_2_43_1","doi-asserted-by":"publisher","DOI":"10.1145\/1618452.1618510"},{"key":"e_1_2_2_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/74333.74361"},{"key":"e_1_2_2_45_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2014.241"},{"key":"e_1_2_2_46_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2013.404"},{"key":"e_1_2_2_47_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2010.63"},{"key":"e_1_2_2_48_1","volume-title":"InverseFaceNet: Deep Single-Shot Inverse Face Rendering From A Single Image. arXiv preprint arXiv:1703.10956","author":"Kim Hyeongwoo","year":"2017","unstructured":"Hyeongwoo Kim, Michael Zollh\u00f6fer, Ayush Tewari, Justus Thies, Christian Richardt, and Christian Theobalt. 2017. InverseFaceNet: Deep Single-Shot Inverse Face Rendering From A Single Image. arXiv preprint arXiv:1703.10956 (2017)."},{"key":"e_1_2_2_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/566654.566627"},{"key":"e_1_2_2_50_1","unstructured":"Philipp Kr\u00e4henb\u00fchl and Vladlen Koltun. 2011. Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials. In Advances in Neural Information Processing Systems."},{"key":"e_1_2_2_51_1","doi-asserted-by":"publisher","DOI":"10.1145\/1618452.1618521"},{"key":"e_1_2_2_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/3098333.3107546"},{"key":"e_1_2_2_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/2766939"},{"key":"e_1_2_2_54_1","doi-asserted-by":"publisher","DOI":"10.1145\/1778765.1778769"},{"key":"e_1_2_2_55_1","doi-asserted-by":"publisher","DOI":"10.1145\/2461912.2462019"},{"key":"e_1_2_2_56_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-46475-6_23"},{"key":"e_1_2_2_57_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2015.425"},{"key":"e_1_2_2_58_1","unstructured":"Loom.ai. 2017. (2017). http:\/\/www.loom.ai."},{"key":"e_1_2_2_59_1","doi-asserted-by":"publisher","DOI":"10.1145\/2461912.2462026"},{"key":"e_1_2_2_60_1","volume-title":"The Chicago face database: A free stimulus set of faces and norming data. Behavior Research Methods 47, 4","author":"Ma Debbie S.","year":"2015","unstructured":"Debbie S. Ma, Joshua Correll, and Bernd Wittenbrink. 2015. The Chicago face database: A free stimulus set of faces and norming data. Behavior Research Methods 47, 4 (2015)."},{"key":"e_1_2_2_61_1","doi-asserted-by":"publisher","DOI":"10.5555\/2383847.2383873"},{"key":"e_1_2_2_62_1","doi-asserted-by":"publisher","DOI":"10.1145\/1597990.1598065"},{"key":"e_1_2_2_63_1","doi-asserted-by":"publisher","DOI":"10.1145\/2980179.2980252"},{"key":"e_1_2_2_64_1","doi-asserted-by":"publisher","DOI":"10.1145\/1360612.1360629"},{"key":"e_1_2_2_65_1","volume-title":"Computer Facial Animation","author":"Parke Frederic I.","year":"2008","unstructured":"Frederic I. Parke and Keith Waters. 2008. Computer Facial Animation (second ed.). AK Peters Ltd."},{"key":"e_1_2_2_66_1","doi-asserted-by":"publisher","DOI":"10.1109\/AVSS.2009.58"},{"key":"e_1_2_2_67_1","doi-asserted-by":"publisher","DOI":"10.1145\/882262.882269"},{"key":"e_1_2_2_68_1","unstructured":"Pinscreen. 2017. (2017). http:\/\/www.pinscreen.com."},{"key":"e_1_2_2_69_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2014.218"},{"key":"e_1_2_2_70_1","volume-title":"Learning Detailed Face Reconstruction from a Single Image. arXiv preprint arXiv:1611.05053","author":"Richardson Elad","year":"2016","unstructured":"Elad Richardson, Matan Sela, Roy Or-El, and Ron Kimmel. 2016. Learning Detailed Face Reconstruction from a Single Image. arXiv preprint arXiv:1611.05053 (2016)."},{"key":"e_1_2_2_71_1","doi-asserted-by":"publisher","DOI":"10.1145\/1833349.1778793"},{"key":"e_1_2_2_72_1","doi-asserted-by":"crossref","unstructured":"Shunsuke Saito, Tianye Li, and Hao Li. 2016. Real-Time Facial Segmentation and Performance Capture from RGB Input. In ECCV.","DOI":"10.1007\/978-3-319-46484-8_15"},{"key":"e_1_2_2_73_1","volume-title":"Photorealistic Facial Texture Inference Using Deep Neural Networks","author":"Saito Shunsuke","unstructured":"Shunsuke Saito, Lingyu Wei, Liwen Hu, Koki Nagano, and Hao Li. 2017. Photorealistic Facial Texture Inference Using Deep Neural Networks. In IEEE CVPR."},{"key":"e_1_2_2_74_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-010-0380-4"},{"key":"e_1_2_2_75_1","doi-asserted-by":"publisher","DOI":"10.1145\/1360612.1360663"},{"key":"e_1_2_2_76_1","doi-asserted-by":"publisher","DOI":"10.1145\/2661229.2661290"},{"key":"e_1_2_2_77_1","unstructured":"Zhixin Shu, Ersin Yumer, Sunil Hadap, Kalyan Sunkavalli, Eli Shechtman, and Dimitris Samaras. 2017. Neural Face Editing with Intrinsic Image Disentangling. (2017). arXiv:1704.04131"},{"key":"e_1_2_2_78_1","doi-asserted-by":"publisher","DOI":"10.1145\/1073204.1073208"},{"key":"e_1_2_2_79_1","unstructured":"K. Simonyan and A. Zisserman. 2014. Very Deep Convolutional Networks for Large-Scale Image Recognition. CoRR abs\/1409.1556 (2014)."},{"key":"e_1_2_2_80_1","volume-title":"Physically-based facial modelling, analysis, and animation. The journal of visualization and computer animation 1, 2","author":"Terzopoulos Demetri","year":"1990","unstructured":"Demetri Terzopoulos and Keith Waters. 1990. Physically-based facial modelling, analysis, and animation. The journal of visualization and computer animation 1, 2 (1990)."},{"key":"e_1_2_2_81_1","volume-title":"MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction. arXiv preprint arXiv:1703.10580","author":"Tewari Ayush","year":"2017","unstructured":"Ayush Tewari, Michael Zollh\u00f6fer, Hyeongwoo Kim, Pablo Garrido, Florian Bernard, Patrick Perez, and Christian Theobalt. 2017. MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction. arXiv preprint arXiv:1703.10580 (2017)."},{"key":"e_1_2_2_82_1","doi-asserted-by":"crossref","unstructured":"J. Thies, M. Zollh\u00f6fer, M. Stamminger, C. Theobalt, and M. Nie\u00dfner. 2016a. Face2Face: Real-time Face Capture and Reenactment of RGB Videos. In IEEE CVPR.","DOI":"10.1109\/CVPR.2016.262"},{"key":"e_1_2_2_83_1","volume-title":"FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality. arXiv preprint arXiv:1610.03151","author":"Thies Justus","year":"2016","unstructured":"Justus Thies, Michael Zollh\u00f6fer, Marc Stamminger, Christian Theobalt, and Matthias Nie\u00dfner. 2016b. FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality. arXiv preprint arXiv:1610.03151 (2016)."},{"key":"e_1_2_2_84_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2001.990517"},{"key":"e_1_2_2_85_1","doi-asserted-by":"publisher","DOI":"10.1145\/1073204.1073209"},{"key":"e_1_2_2_86_1","doi-asserted-by":"publisher","DOI":"10.1145\/2614028.2615407"},{"key":"e_1_2_2_87_1","doi-asserted-by":"publisher","DOI":"10.1145\/1531326.1531362"},{"key":"e_1_2_2_88_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2007.30"},{"key":"e_1_2_2_89_1","doi-asserted-by":"publisher","DOI":"10.1145\/2010324.1964972"},{"key":"e_1_2_2_90_1","doi-asserted-by":"publisher","DOI":"10.1145\/1599470.1599472"},{"key":"e_1_2_2_91_1","volume-title":"Hair Interpolation for Portrait Morphing. Computer Graphics Forum","author":"Weng Yanlin","year":"2013","unstructured":"Yanlin Weng, Lvdi Wang, Xiao Li, Menglei Chai, and Kun Zhou. 2013. Hair Interpolation for Portrait Morphing. 
Computer Graphics Forum (2013)."},{"key":"e_1_2_2_92_1","doi-asserted-by":"publisher","DOI":"10.1145\/2980179.2980233"},{"key":"e_1_2_2_93_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2013.75"},{"key":"e_1_2_2_94_1","doi-asserted-by":"publisher","DOI":"10.1145\/1618452.1618512"},{"key":"e_1_2_2_95_1","doi-asserted-by":"publisher","DOI":"10.1145\/1073204.1073298"},{"key":"e_1_2_2_96_1","doi-asserted-by":"publisher","DOI":"10.5555\/1888028.1888042"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3130800.31310887","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3130800.31310887","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T02:11:18Z","timestamp":1750212678000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3130800.31310887"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2017,11,20]]},"references-count":95,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2017,12,31]]}},"alternative-id":["10.1145\/3130800.31310887"],"URL":"https:\/\/doi.org\/10.1145\/3130800.31310887","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"value":"0730-0301","type":"print"},{"value":"1557-7368","type":"electronic"}],"subject":[],"published":{"date-parts":[[2017,11,20]]},"assertion":[{"value":"2017-11-20","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}