{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,31]],"date-time":"2026-03-31T02:41:37Z","timestamp":1774924897545,"version":"3.50.1"},"reference-count":40,"publisher":"Association for Computing Machinery (ACM)","issue":"6","license":[{"start":{"date-parts":[[2024,11,19]],"date-time":"2024-11-19T00:00:00Z","timestamp":1731974400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2024,12,19]]},"abstract":"<jats:p>\n            Despite recent advances in multi-view hair reconstruction, achieving strand-level precision remains a significant challenge due to inherent limitations in existing capture pipelines. We introduce\n            <jats:italic>GroomCap<\/jats:italic>\n            , a novel multi-view hair capture method that reconstructs faithful and high-fidelity hair geometry without relying on external data priors. To address the limitations of conventional reconstruction algorithms, we propose a neural implicit representation for hair volume that encodes high-resolution 3D orientation and occupancy from input views. This implicit hair volume is trained with a new volumetric 3D orientation rendering algorithm, coupled with 2D orientation distribution supervision, to effectively prevent the loss of structural information caused by undesired orientation blending. We further propose a Gaussian-based hair optimization strategy to refine the traced hair strands with a novel chained Gaussian representation, utilizing direct photometric supervision from images. Our results demonstrate that\n            <jats:italic>GroomCap<\/jats:italic>\n            is able to capture high-quality hair geometries that are not only more precise and detailed than existing methods but also versatile enough for a range of applications.\n          <\/jats:p>","DOI":"10.1145\/3687768","type":"journal-article","created":{"date-parts":[[2024,11,19]],"date-time":"2024-11-19T15:46:04Z","timestamp":1732031164000},"page":"1-15","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":17,"title":["GroomCap: High-Fidelity Prior-Free Hair Capture"],"prefix":"10.1145","volume":"43","author":[{"ORCID":"https:\/\/orcid.org\/0009-0005-6189-2326","authenticated-orcid":false,"given":"Yuxiao","family":"Zhou","sequence":"first","affiliation":[{"name":"ETH Z\u00fcrich, Zurich, Switzerland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3447-0866","authenticated-orcid":false,"given":"Menglei","family":"Chai","sequence":"additional","affiliation":[{"name":"Google Inc., Los Angeles, United States of America"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2879-6114","authenticated-orcid":false,"given":"Daoye","family":"Wang","sequence":"additional","affiliation":[{"name":"Google Inc., Zurich, Switzerland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1066-8955","authenticated-orcid":false,"given":"Sebastian","family":"Winberg","sequence":"additional","affiliation":[{"name":"Google Inc., Zurich, Switzerland"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-2033-4704","authenticated-orcid":false,"given":"Erroll","family":"Wood","sequence":"additional","affiliation":[{"name":"Google Inc., Cambridge, United 
Kingdom"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0220-0853","authenticated-orcid":false,"given":"Kripasindhu","family":"Sarkar","sequence":"additional","affiliation":[{"name":"Google Inc., Zurich, Switzerland"}]},{"ORCID":"https:\/\/orcid.org\/0009-0003-9324-779X","authenticated-orcid":false,"given":"Markus","family":"Gross","sequence":"additional","affiliation":[{"name":"ETH Z\u00fcrich, Zurich, Switzerland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8077-1205","authenticated-orcid":false,"given":"Thabo","family":"Beeler","sequence":"additional","affiliation":[{"name":"Google Inc., Zurich, Switzerland"}]}],"member":"320","published-online":{"date-parts":[[2024,11,19]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/2816795.2818112"},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925961"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/2461912.2461990"},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/2185520.2185612"},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/3355089.3356571"},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/2366145.2366165"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/2601097.2601194"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/2766931"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/2661229.2661254"},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/1618452.1618510"},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/3592433"},{"key":"e_1_2_1_12_1","first-page":"1","article-title":"Deep-MVSHair: Deep Hair Modeling from Sparse Views. In SIGGRAPH Asia 2022","volume":"10","author":"Kuang Zhiyi","year":"2022","unstructured":"Zhiyi Kuang, Yiyang Chen, Hongbo Fu, Kun Zhou, and Youyi Zheng. 2022. Deep-MVSHair: Deep Hair Modeling from Sparse Views. In SIGGRAPH Asia 2022. ACM, 10:1--10:8.","journal-title":"ACM"},{"key":"e_1_2_1_13_1","volume-title":"Juhyun Lee, Wan-Teh Chang, Wei Hua, Manfred Georg, and Matthias Grundmann.","author":"Lugaresi Camillo","year":"2019","unstructured":"Camillo Lugaresi, Jiuqiang Tang, Hadon Nash, Chris McClanahan, Esha Uboweja, Michael Hays, Fan Zhang, Chuo-Ling Chang, Ming Guang Yong, Juhyun Lee, Wan-Teh Chang, Wei Hua, Manfred Georg, and Matthias Grundmann. 2019. MediaPipe: A Framework for Building Perception Pipelines. CoRR abs\/1906.08172 (2019)."},{"key":"e_1_2_1_14_1","volume-title":"GaussianHair: Hair Modeling and Rendering with Light-aware Gaussians. CoRR abs\/2402.10483","author":"Luo Haimin","year":"2024","unstructured":"Haimin Luo, Min Ouyang, Zijun Zhao, Suyi Jiang, Longwen Zhang, Qixuan Zhang, Wei Yang, Lan Xu, and Jingyi Yu. 2024. GaussianHair: Hair Modeling and Rendering with Light-aware Gaussians. CoRR abs\/2402.10483 (2024)."},{"key":"e_1_2_1_15_1","volume-title":"CVPR","author":"Luo Linjie","year":"2012","unstructured":"Linjie Luo, Hao Li, Sylvain Paris, Thibaut Weise, Mark Pauly, and Szymon Rusinkiewicz. 2012. Multi-view hair capture using orientation fields. In CVPR 2012. 1490--1497."},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/2461912.2462026"},{"key":"e_1_2_1_17_1","volume-title":"Wide-Baseline Hair Capture Using Strand-Based Refinement. In CVPR","author":"Luo Linjie","year":"2013","unstructured":"Linjie Luo, Cha Zhang, Zhengyou Zhang, and Szymon Rusinkiewicz. 2013b. Wide-Baseline Hair Capture Using Strand-Based Refinement. In CVPR 2013. 
265--272."},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58452-8_24"},{"key":"e_1_2_1_19_1","volume-title":"Strand-Accurate Multi-View Hair Capture. In CVPR","author":"Nam Giljoo","year":"2019","unstructured":"Giljoo Nam, Chenglei Wu, Min H. Kim, and Yaser Sheikh. 2019. Strand-Accurate Multi-View Hair Capture. In CVPR 2019. 155--164."},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/1015706.1015784"},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/1360612.1360629"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-19827-4_5"},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-19827-4_5"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3272127.3275019"},{"key":"e_1_2_1_25_1","first-page":"1","article-title":"LitNeRF","volume":"2023","author":"Sarkar Kripasindhu","year":"2023","unstructured":"Kripasindhu Sarkar, Marcel C. B\u00fchler, Gengyan Li, Daoye Wang, Delio Vicini, J\u00e9r\u00e9my Riviere, Yinda Zhang, Sergio Orts-Escolano, Paulo F. U. Gotardo, Thabo Beeler, and Abhimitra Meka. 2023. LitNeRF: Intrinsic Radiance Decomposition for High-Quality View Synthesis and Relighting of Faces. In SIGGRAPH Asia 2023. 42:1--42:11.","journal-title":"In SIGGRAPH Asia"},{"key":"e_1_2_1_26_1","volume-title":"Structure-from-Motion Revisited. In CVPR","author":"Johannes","year":"2016","unstructured":"Johannes L. Sch\u00f6nberger and Jan-Michael Frahm. 2016. Structure-from-Motion Revisited. In CVPR 2016. 4104--4113."},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3592106"},{"key":"e_1_2_1_28_1","volume-title":"Neural Haircut: Prior-Guided Strand-Based Hair Reconstruction. In ICCV 2023. 1970","author":"Sklyarova Vanessa","year":"2023","unstructured":"Vanessa Sklyarova, Jenya Chelishev, Andreea Dogaru, Igor Medvedev, Victor Lempitsky, and Egor Zakharov. 2023. Neural Haircut: Prior-Guided Strand-Based Hair Reconstruction. In ICCV 2023. 19705--19716."},{"key":"e_1_2_1_29_1","volume-title":"EGSR","author":"Sun Tiancheng","year":"2021","unstructured":"Tiancheng Sun, Giljoo Nam, Carlos Aliaga, Christophe Hery, and Ravi Ramamoorthi. 2021. Human Hair Inverse Rendering using Multi-View Photometric data. In EGSR 2021. 179--190."},{"key":"e_1_2_1_30_1","first-page":"8641","article-title":"NeuWigs","volume":"2023","author":"Wang Ziyan","year":"2023","unstructured":"Ziyan Wang, Giljoo Nam, Tuur Stuyck, Stephen Lombardi, Chen Cao, Jason M. Saragih, Michael Zollh\u00f6fer, Jessica K. Hodgins, and Christoph Lassner. 2023. NeuWigs: A Neural Dynamic Model for Volumetric Hair Capture and Animation. In CVPR 2023. 8641--8651.","journal-title":"In CVPR"},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.00605"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/1073204.1073267"},{"key":"e_1_2_1_33_1","volume-title":"MonoHair: High-Fidelity Hair Modeling from a Monocular Video. CoRR abs\/2403.18356","author":"Wu Keyu","year":"2024","unstructured":"Keyu Wu, Lingchen Yang, Zhiyi Kuang, Yao Feng, Xutao Han, Yuefan Shen, Hongbo Fu, Kun Zhou, and Youyi Zheng. 2024. MonoHair: High-Fidelity Hair Modeling from a Monocular Video. 
CoRR abs\/2403.18356 (2024)."},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.00158"},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/3355089.3356511"},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3073627"},{"key":"e_1_2_1_37_1","first-page":"205","article-title":"Modeling hair from an RGB-D camera","volume":"37","author":"Zhang Meng","year":"2018","unstructured":"Meng Zhang, Pan Wu, Hongzhi Wu, Yanlin Weng, Youyi Zheng, and Kun Zhou. 2018. Modeling hair from an RGB-D camera. ACM Trans. Graph. 37, 6 (2018), 205.","journal-title":"ACM Trans. Graph."},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.01224"},{"key":"e_1_2_1_39_1","article-title":"GroomGen: A High-Quality Generative Hair Model Using Hierarchical Latent Representations","volume":"42","author":"Zhou Yuxiao","year":"2023","unstructured":"Yuxiao Zhou, Menglei Chai, Alessandro Pepe, Markus Gross, and Thabo Beeler. 2023. GroomGen: A High-Quality Generative Hair Model Using Hierarchical Latent Representations. ACM Trans. Graph. 42, 6 (2023), 270:1--270:16.","journal-title":"ACM Trans. Graph."},{"key":"e_1_2_1_40_1","volume-title":"HairNet: Single-View Hair Reconstruction Using Convolutional Neural Networks. In ECCV","volume":"11215","author":"Zhou Yi","year":"2018","unstructured":"Yi Zhou, Liwen Hu, Jun Xing, Weikai Chen, Han-Wei Kung, Xin Tong, and Hao Li. 2018. HairNet: Single-View Hair Reconstruction Using Convolutional Neural Networks. In ECCV 2018, Vol. 11215. 249--265."}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3687768","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3687768","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:17:45Z","timestamp":1750295865000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3687768"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,11,19]]},"references-count":40,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2024,12,19]]}},"alternative-id":["10.1145\/3687768"],"URL":"https:\/\/doi.org\/10.1145\/3687768","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"value":"0730-0301","type":"print"},{"value":"1557-7368","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,11,19]]},"assertion":[{"value":"2024-11-19","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
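The record above is a standard Crossref REST API "work" envelope: the payload sits under "message", with the title, author list, container-title, volume/issue/page, and reference array in the fields shown. A minimal Python sketch for fetching and flattening such a record into a one-line citation follows; it assumes network access to the public api.crossref.org endpoint, and the mailto address is a placeholder you should replace with your own (Crossref's "polite pool" convention).

import json
import urllib.request

# DOI of the work documented in the record above.
DOI = "10.1145/3687768"
url = f"https://api.crossref.org/works/{DOI}?mailto=you@example.com"

# Fetch the JSON envelope; its shape matches the record shown above.
with urllib.request.urlopen(url) as resp:
    record = json.load(resp)

msg = record["message"]

# "title" and "container-title" are arrays; authors carry given/family parts.
title = msg["title"][0]
venue = msg["container-title"][0]
authors = ", ".join(f'{a["given"]} {a["family"]}' for a in msg["author"])

print(f"{authors}. {title}. {venue} {msg['volume']}({msg['issue']}), {msg['page']}.")

For this DOI the script would print the ACM Trans. Graph. 43(6) citation for GroomCap; swapping in any other DOI returns the same envelope, so the field accesses stay unchanged.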