{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T15:51:54Z","timestamp":1776095514615,"version":"3.50.1"},"reference-count":113,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2024,7,19]],"date-time":"2024-07-19T00:00:00Z","timestamp":1721347200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Key R&D Program of Zhejiang","award":["2023C01047"],"award-info":[{"award-number":["2023C01047"]}]},{"DOI":"10.13039\/501100006469","name":"FDCT","doi-asserted-by":"crossref","award":["0002\/2023\/AKP"],"award-info":[{"award-number":["0002\/2023\/AKP"]}],"id":[{"id":"10.13039\/501100006469","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2024,7,19]]},"abstract":"<jats:p>\n            Recent advancements in 2D diffusion models allow appearance generation on untextured raw meshes. These methods create RGB textures by distilling a 2D diffusion model, which often contains unwanted baked-in shading effects and results in unrealistic rendering effects in the downstream applications. Generating Physically Based Rendering (PBR) materials instead of just RGB textures would be a promising solution. However, directly distilling the PBR material parameters from 2D diffusion models still suffers from incorrect material decomposition, such as baked-in shading effects in albedo. We introduce\n            <jats:italic>DreamMat<\/jats:italic>\n            , an innovative approach to resolve the aforementioned problem, to generate high-quality PBR materials from text descriptions. 
We find that the main reason for the incorrect material distillation is that large-scale 2D diffusion models are only trained to generate final shading colors, resulting in insufficient constraints on material decomposition during distillation. To tackle this problem, we first finetune a new light-aware 2D diffusion model to condition on a given lighting environment and generate the shading results under this specific lighting condition. Then, by applying the same environment lights in the material distillation, DreamMat can generate high-quality PBR materials that are not only consistent with the given geometry but also free from any baked-in shading effects in albedo. Extensive experiments demonstrate that the materials produced by our method exhibit greater visual appeal to users and achieve significantly superior rendering quality compared to baseline methods, making them preferable for downstream tasks such as game and film production.\n          <\/jats:p>","DOI":"10.1145\/3658170","type":"journal-article","created":{"date-parts":[[2024,7,19]],"date-time":"2024-07-19T14:47:57Z","timestamp":1721400477000},"page":"1-18","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":20,"title":["DreamMat: High-quality PBR Material Generation with Geometry- and Light-aware Diffusion Models"],"prefix":"10.1145","volume":"43","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8512-0551","authenticated-orcid":false,"given":"Yuqing","family":"Zhang","sequence":"first","affiliation":[{"name":"Zhejiang University, Hangzhou, China"},{"name":"State Key Lab CAD&amp;CG, Zhejiang University, ZJU-Tencent Game and Intelligent Graphics Innovation Technology Joint Lab, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2933-5667","authenticated-orcid":false,"given":"Yuan","family":"Liu","sequence":"additional","affiliation":[{"name":"Tencent Technology (Shenzhen) Co., Ltd., Shenzhen, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0003-3332-7966","authenticated-orcid":false,"given":"Zhiyu","family":"Xie","sequence":"additional","affiliation":[{"name":"Zhejiang University, Hangzhou, China"},{"name":"State Key Lab CAD&amp;CG, Zhejiang University, ZJU-Tencent Game and Intelligent Graphics Innovation Technology Joint Lab, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0009-2369-7465","authenticated-orcid":false,"given":"Lei","family":"Yang","sequence":"additional","affiliation":[{"name":"Tencent Technology (Shenzhen) Co., Ltd., ShenZhen, China"},{"name":"Bournemouth University, ShenZhen, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1601-0038","authenticated-orcid":false,"given":"Zhongyuan","family":"Liu","sequence":"additional","affiliation":[{"name":"Tencent Technology (Shenzhen) Co., Ltd., Shenzhen, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0004-6510-5257","authenticated-orcid":false,"given":"Mengzhou","family":"Yang","sequence":"additional","affiliation":[{"name":"Tencent Technology (Shenzhen) Co., Ltd., Shenzhen, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9698-0178","authenticated-orcid":false,"given":"Runze","family":"Zhang","sequence":"additional","affiliation":[{"name":"Tencent Technology (Shenzhen) Co., Ltd., Shenzhen, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5222-7069","authenticated-orcid":false,"given":"Qilong","family":"Kou","sequence":"additional","affiliation":[{"name":"Tencent Technology (Shenzhen) Co., Ltd., Shenzhen, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3335-6623","authenticated-orcid":false,"given":"Cheng","family":"Lin","sequence":"additional","affiliation":[{"name":"Tencent Technology (Shenzhen) Co., Ltd., Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2284-3952","authenticated-orcid":false,"given":"Wenping","family":"Wang","sequence":"additional","affiliation":[{"name":"Texas A&amp;M University, Texas, United States of 
America"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7339-2920","authenticated-orcid":false,"given":"Xiaogang","family":"Jin","sequence":"additional","affiliation":[{"name":"Zhejiang University, Hangzhou, China"},{"name":"State Key Lab of CAD&amp;CG, Zhejiang University, Hangzhou, China"}]}],"member":"320","published-online":{"date-parts":[[2024,7,19]]},"reference":[{"key":"e_1_2_2_1_1","doi-asserted-by":"crossref","unstructured":"Badour AlBahar Shunsuke Saito Hung-Yu Tseng Changil Kim Johannes Kopf and Jia-Bin Huang. 2023. Single-Image 3D Human Digitization with Shape-guided Diffusion. In SIGGRAPH Asia. 1--11.","DOI":"10.1145\/3610548.3618153"},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2014.2377712"},{"key":"e_1_2_2_3_1","volume-title":"Neural reflectance fields for appearance acquisition. arXiv preprint arXiv:2008.03824","author":"Bi Sai","year":"2020","unstructured":"Sai Bi, Zexiang Xu, Pratul Srinivasan, Ben Mildenhall, Kalyan Sunkavalli, Milo\u0161 Ha\u0161an, Yannick Hold-Geoffroy, David Kriegman, and Ravi Ramamoorthi. 2020a. Neural reflectance fields for appearance acquisition. arXiv preprint arXiv:2008.03824 (2020)."},{"key":"e_1_2_2_4_1","doi-asserted-by":"crossref","unstructured":"Sai Bi Zexiang Xu Kalyan Sunkavalli David Kriegman and Ravi Ramamoorthi. 2020b. Deep 3d capture: Geometry and reflectance from sparse multi-view images. In CVPR.","DOI":"10.1109\/CVPR42600.2020.00600"},{"key":"e_1_2_2_5_1","volume-title":"Nerd: Neural reflectance decomposition from image collections. In CVPR.","author":"Boss Mark","year":"2021","unstructured":"Mark Boss, Raphael Braun, Varun Jampani, Jonathan T Barron, Ce Liu, and Hendrik Lensch. 2021a. Nerd: Neural reflectance decomposition from image collections. In CVPR."},{"key":"e_1_2_2_6_1","volume-title":"Neural-pil: Neural pre-integrated lighting for reflectance decomposition. 
In NeurIPS.","author":"Boss Mark","year":"2021","unstructured":"Mark Boss, Varun Jampani, Raphael Braun, Ce Liu, Jonathan Barron, and Hendrik Lensch. 2021b. Neural-pil: Neural pre-integrated lighting for reflectance decomposition. In NeurIPS."},{"key":"e_1_2_2_7_1","unstructured":"Brent Burley and Walt Disney Animation Studios. 2012. Physically-based shading at disney. In SIGGRAPH."},{"key":"e_1_2_2_8_1","unstructured":"Tianshi Cao Karsten Kreis Sanja Fidler Nicholas Sharp and KangXue Yin. 2023. TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models. In ICCV."},{"key":"e_1_2_2_9_1","volume-title":"Scenetex: High-quality texture synthesis for indoor scenes via diffusion priors. arXiv preprint arXiv:2311.17261","author":"Chen Dave Zhenyu","year":"2023","unstructured":"Dave Zhenyu Chen, Haoxuan Li, Hsin-Ying Lee, Sergey Tulyakov, and Matthias Nie\u00dfner. 2023b. Scenetex: High-quality texture synthesis for indoor scenes via diffusion priors. arXiv preprint arXiv:2311.17261 (2023)."},{"key":"e_1_2_2_10_1","doi-asserted-by":"crossref","unstructured":"Dave Zhenyu Chen Yawar Siddiqui Hsin-Ying Lee Sergey Tulyakov and Matthias Nie\u00dfner. 2023c. Text2Tex: Text-driven Texture Synthesis via Diffusion Models. In ICCV.","DOI":"10.1109\/ICCV51070.2023.01701"},{"key":"e_1_2_2_11_1","doi-asserted-by":"crossref","unstructured":"Rui Chen Yongwei Chen Ningxin Jiao and Kui Jia. 2023a. Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation. In ICCV.","DOI":"10.1109\/ICCV51070.2023.02033"},{"key":"e_1_2_2_12_1","volume-title":"TANGO: Text-driven Photorealistic and Robust 3D Stylization via Lighting Decomposition. In NeurIPS.","author":"Chen Yongwei","year":"2022","unstructured":"Yongwei Chen, Rui Chen, Jiabao Lei, Yabin Zhang, and Kui Jia. 2022a. TANGO: Text-driven Photorealistic and Robust 3D Stylization via Lighting Decomposition. 
In NeurIPS."},{"key":"e_1_2_2_13_1","doi-asserted-by":"crossref","unstructured":"Ziyu Chen Chenjing Ding Jianfei Guo Dongliang Wang Yikang Li Xuan Xiao Wei Wu and Li Song. 2022b. L-Tracing: Fast Light Visibility Estimation on Neural Surfaces by Sphere Tracing. In ECCV.","DOI":"10.1007\/978-3-031-19784-0_13"},{"key":"e_1_2_2_14_1","volume-title":"Structure from Duplicates: Neural Inverse Graphics from a Pile of Objects. arXiv preprint arXiv:2401.05236","author":"Cheng Tianhang","year":"2024","unstructured":"Tianhang Cheng, Wei-Chiu Ma, Kaiyu Guan, Antonio Torralba, and Shenlong Wang. 2024. Structure from Duplicates: Neural Inverse Graphics from a Pile of Objects. arXiv preprint arXiv:2401.05236 (2024)."},{"key":"e_1_2_2_15_1","doi-asserted-by":"crossref","unstructured":"Ziang Cheng Hongdong Li Yuta Asano Yinqiang Zheng and Imari Sato. 2021. Multiview 3d reconstruction of a texture-less smooth surface of unknown generic reflectance. In CVPR.","DOI":"10.1109\/CVPR46437.2021.01596"},{"key":"e_1_2_2_16_1","doi-asserted-by":"crossref","unstructured":"Evgeniia Cheskidova Aleksandr Arganaidi Daniel-Ionut Rancea and Olaf Haag. 2023. Geometry Aware Texturing. In SIGGRAPH Asia. 1--2.","DOI":"10.1145\/3610542.3626152"},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/357290.357293"},{"key":"e_1_2_2_18_1","volume-title":"Scannet: Richly-annotated 3d reconstructions of indoor scenes. In CVPR.","author":"Dai Angela","year":"2017","unstructured":"Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nie\u00dfner. 2017. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In CVPR."},{"key":"e_1_2_2_19_1","volume-title":"Pandora: Polarization-aided neural decomposition of radiance. In ECCV.","author":"Dave Akshat","year":"2022","unstructured":"Akshat Dave, Yongyi Zhao, and Ashok Veeraraghavan. 2022. Pandora: Polarization-aided neural decomposition of radiance. 
In ECCV."},{"key":"e_1_2_2_20_1","volume-title":"Objaverse: A universe of annotated 3d objects. In CVPR.","author":"Deitke Matt","year":"2023","unstructured":"Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli Vander-Bilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. 2023. Objaverse: A universe of annotated 3d objects. In CVPR."},{"key":"e_1_2_2_21_1","volume-title":"DIP: Differentiable Interreflection-aware Physics-based Inverse Rendering. arXiv preprint arXiv:2212.04705","author":"Deng Youming","year":"2022","unstructured":"Youming Deng, Xueting Li, Sifei Liu, and Ming-Hsuan Yang. 2022. DIP: Differentiable Interreflection-aware Physics-based Inverse Rendering. arXiv preprint arXiv:2212.04705 (2022)."},{"key":"e_1_2_2_22_1","doi-asserted-by":"crossref","unstructured":"Valentin Deschaintre Yiming Lin and Abhijeet Ghosh. 2021. Deep polarization imaging for 3D shape and SVBRDF acquisition. In CVPR.","DOI":"10.1109\/CVPR46437.2021.01531"},{"key":"e_1_2_2_23_1","first-page":"1","article-title":"Deep inverse rendering for high-resolution SVBRDF estimation from an arbitrary number of images","volume":"38","author":"Pieter Peers Xiao Li DUAN GAO","year":"2019","unstructured":"Xiao Li DUAN GAO, Pieter Peers, Kun Xu, and Xin Tong. 2019. Deep inverse rendering for high-resolution SVBRDF estimation from an arbitrary number of images. ACM Transactions on Graphics (ToG) 38, 4 (2019), 1--15.","journal-title":"ACM Transactions on Graphics (ToG)"},{"key":"e_1_2_2_24_1","volume-title":"Relightable 3D Gaussian: Real-time Point Cloud Relighting with BRDF Decomposition and Ray Tracing. arXiv preprint arXiv:2311.16043","author":"Gao Jian","year":"2023","unstructured":"Jian Gao, Chun Gu, Youtian Lin, Hao Zhu, Xun Cao, Li Zhang, and Yao Yao. 2023. Relightable 3D Gaussian: Real-time Point Cloud Relighting with BRDF Decomposition and Ray Tracing. 
arXiv preprint arXiv:2311.16043 (2023)."},{"key":"e_1_2_2_25_1","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3414685.3417779","article-title":"MaterialGAN: reflectance capture using a generative SVBRDF model","volume":"39","author":"Guo Yu","year":"2020","unstructured":"Yu Guo, Cameron Smith, Milo\u0161 Ha\u0161an, Kalyan Sunkavalli, and Shuang Zhao. 2020. MaterialGAN: reflectance capture using a generative SVBRDF model. ACM Transactions on Graphics (ToG) 39, 6 (2020), 1--13.","journal-title":"ACM Transactions on Graphics (ToG)"},{"key":"e_1_2_2_26_1","unstructured":"Yuan-Chen Guo Ying-Tian Liu Ruizhi Shao Christian Laforte Vikram Voleti Guan Luo Chia-Hao Chen Zi-Xin Zou Chen Wang Yan-Pei Cao and Song-Hai Zhang. 2023. threestudio: A unified framework for 3D content generation. https:\/\/github.com\/threestudio-project\/threestudio."},{"key":"e_1_2_2_27_1","unstructured":"Jon Hasselgren Nikolai Hofmann and Jacob Munkberg. 2022. Shape light and material decomposition from images using Monte Carlo rendering and denoising. NeurIPS."},{"key":"e_1_2_2_28_1","volume-title":"Ronan Le Bras, and Yejin Choi","author":"Hessel Jack","year":"2021","unstructured":"Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. 2021. CLIPScore: A Reference-free Evaluation Metric for Image Captioning. In EMNLP."},{"key":"e_1_2_2_29_1","volume-title":"Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30","author":"Heusel Martin","year":"2017","unstructured":"Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. 2017. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017)."},{"key":"e_1_2_2_30_1","unstructured":"Jonathan Ho Ajay Jain and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. 
In NeurIPS."},{"key":"e_1_2_2_31_1","volume-title":"Text2room: Extracting textured 3d meshes from 2d text-to-image models. arXiv preprint arXiv:2303.11989","author":"H\u00f6llein Lukas","year":"2023","unstructured":"Lukas H\u00f6llein, Ang Cao, Andrew Owens, Justin Johnson, and Matthias Nie\u00dfner. 2023. Text2room: Extracting textured 3d meshes from 2d text-to-image models. arXiv preprint arXiv:2303.11989 (2023)."},{"key":"e_1_2_2_32_1","volume-title":"Humannorm: Learning normal diffusion model for high-quality and realistic 3d human generation. arXiv preprint arXiv:2310.01406","author":"Huang Xin","year":"2023","unstructured":"Xin Huang, Ruizhi Shao, Qi Zhang, Hongwen Zhang, Ying Feng, Yebin Liu, and Qing Wang. 2023. Humannorm: Learning normal diffusion model for high-quality and realistic 3d human generation. arXiv preprint arXiv:2310.01406 (2023)."},{"key":"e_1_2_2_33_1","volume-title":"GaussianShader: 3D Gaussian Splatting with Shading Functions for Reflective Surfaces. arXiv preprint arXiv:2311.17977","author":"Jiang Yingwenqi","year":"2023","unstructured":"Yingwenqi Jiang, Jiadong Tu, Yuan Liu, Xifeng Gao, Xiaoxiao Long, Wenping Wang, and Yuexin Ma. 2023. GaussianShader: 3D Gaussian Splatting with Shading Functions for Reflective Surfaces. arXiv preprint arXiv:2311.17977 (2023)."},{"key":"e_1_2_2_34_1","doi-asserted-by":"crossref","unstructured":"Haian Jin Isabella Liu Peijia Xu Xiaoshuai Zhang Songfang Han Sai Bi Xiaowei Zhou Zexiang Xu and Hao Su. 2023. TensoIR: Tensorial Inverse Rendering. In CVPR.","DOI":"10.1109\/CVPR52729.2023.00024"},{"key":"e_1_2_2_35_1","doi-asserted-by":"crossref","unstructured":"James T. Kajiya. 1986. The rendering equation. In SIGGRAPH.","DOI":"10.1145\/15922.15902"},{"key":"e_1_2_2_36_1","first-page":"1","article-title":"Real shading in unreal engine 4","volume":"4","author":"Karis Brian","year":"2013","unstructured":"Brian Karis and Epic Games. 2013. Real shading in unreal engine 4. Proc. 
Physically Based Shading Theory Practice 4, 3 (2013), 1.","journal-title":"Proc. Physically Based Shading Theory Practice"},{"key":"e_1_2_2_37_1","volume-title":"Noise-free score distillation. arXiv preprint arXiv:2310.17590","author":"Katzir Oren","year":"2023","unstructured":"Oren Katzir, Or Patashnik, Daniel Cohen-Or, and Dani Lischinski. 2023. Noise-free score distillation. arXiv preprint arXiv:2310.17590 (2023)."},{"key":"e_1_2_2_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/3592433"},{"key":"e_1_2_2_39_1","volume-title":"Consistent Mesh Diffusion. arXiv preprint arXiv:2312.00971","author":"Knodt Julian","year":"2023","unstructured":"Julian Knodt and Xifeng Gao. 2023. Consistent Mesh Diffusion. arXiv preprint arXiv:2312.00971 (2023)."},{"key":"e_1_2_2_40_1","volume-title":"Intrinsic Image Diffusion for Single-view Material Estimation. arXiv preprint arXiv:2312.12274","author":"Kocsis Peter","year":"2023","unstructured":"Peter Kocsis, Vincent Sitzmann, and Matthias Nie\u00dfner. 2023. Intrinsic Image Diffusion for Single-view Material Estimation. arXiv preprint arXiv:2312.12274 (2023)."},{"key":"e_1_2_2_41_1","doi-asserted-by":"crossref","unstructured":"Zhengfei Kuang Kyle Olszewski Menglei Chai Zeng Huang Panos Achlioptas and Sergey Tulyakov. 2022. NeROIC: Neural Rendering of Objects from Online Image Collections. In SIGGRAPH.","DOI":"10.1145\/3528223.3530177"},{"key":"e_1_2_2_42_1","first-page":"124","article-title":"Content creation for a 3D game with Maya and Unity 3D. Institute of Computer Graphics and Algorithms","volume":"6","author":"Labsch\u00fctz Matthias","year":"2011","unstructured":"Matthias Labsch\u00fctz, Katharina Kr\u00f6sl, Mariebeth Aquino, Florian Grash\u00e4ftl, and Stephanie Kohl. 2011. Content creation for a 3D game with Maya and Unity 3D. 
Institute of Computer Graphics and Algorithms, Vienna University of Technology 6 (2011), 124.","journal-title":"Vienna University of Technology"},{"key":"e_1_2_2_43_1","volume-title":"EucliDreamer: Fast and High-Quality Texturing for 3D Models with Stable Diffusion Depth. arXiv preprint arXiv:2311.15573","author":"Le Cindy","year":"2023","unstructured":"Cindy Le, Congrui Hetang, Ang Cao, and Yihui He. 2023. EucliDreamer: Fast and High-Quality Texturing for 3D Models with Stable Diffusion Depth. arXiv preprint arXiv:2311.15573 (2023)."},{"key":"e_1_2_2_44_1","volume-title":"NeISF: Neural Incident Stokes Field for Geometry and Material Estimation. arXiv preprint arXiv:2311.13187","author":"Li Chenhao","year":"2023","unstructured":"Chenhao Li, Taishi Ono, Takeshi Uemori, Hajime Mihara, Alexander Gatto, Hajime Nagahara, and Yuseke Moriuchi. 2023b. NeISF: Neural Incident Stokes Field for Geometry and Material Estimation. arXiv preprint arXiv:2311.13187 (2023)."},{"key":"e_1_2_2_45_1","volume-title":"BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation. In ICML.","author":"Li Junnan","year":"2022","unstructured":"Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation. In ICML."},{"key":"e_1_2_2_46_1","unstructured":"Junxuan Li and Hongdong Li. 2022. Neural Reflectance for Shape Recovery with Shadow Handling. In CVPR."},{"key":"e_1_2_2_47_1","volume-title":"SweetDreamer: Aligning Geometric Priors in 2D Diffusion for Consistent Text-to-3D. arxiv:2310.02596","author":"Li Weiyu","year":"2023","unstructured":"Weiyu Li, Rui Chen, Xuelin Chen, and Ping Tan. 2023a. SweetDreamer: Aligning Geometric Priors in 2D Diffusion for Consistent Text-to-3D. arxiv:2310.02596 (2023)."},{"key":"e_1_2_2_48_1","unstructured":"Zhengqin Li Mohammad Shafiei Ravi Ramamoorthi Kalyan Sunkavalli and Manmohan Chandraker. 2020. 
Inverse rendering for complex indoor scenes: Shape spatially-varying lighting and svbrdf from a single image. In CVPR."},{"key":"e_1_2_2_49_1","unstructured":"Zhengqin Li Zexiang Xu Ravi Ramamoorthi Kalyan Sunkavalli and Manmohan Chandraker. 2018. Learning to reconstruct shape and spatially-varying reflectance from a single image. In SIGGRAPH Asia."},{"key":"e_1_2_2_50_1","volume-title":"GS-IR: 3D Gaussian Splatting for Inverse Rendering. arXiv preprint arXiv:2311.16473","author":"Liang Zhihao","year":"2023","unstructured":"Zhihao Liang, Qi Zhang, Ying Feng, Ying Shan, and Kui Jia. 2023. GS-IR: 3D Gaussian Splatting for Inverse Rendering. arXiv preprint arXiv:2311.16473 (2023)."},{"key":"e_1_2_2_51_1","unstructured":"Chen-Hsuan Lin Jun Gao Luming Tang Towaki Takikawa Xiaohui Zeng Xun Huang Karsten Kreis Sanja Fidler Ming-Yu Liu and Tsung-Yi Lin. 2023. Magic3D: High-Resolution Text-to-3D Content Creation. In CVPR."},{"key":"e_1_2_2_52_1","unstructured":"Minghua Liu Chao Xu Haian Jin Linghao Chen Zexiang Xu Hao Su et al. 2023f. One-2-3-45: Any single image to 3d mesh in 45 seconds without per-shape optimization. arXiv preprint arXiv:2306.16928 (2023)."},{"key":"e_1_2_2_53_1","volume-title":"Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick.","author":"Liu Ruoshi","year":"2023","unstructured":"Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. 2023d. Zero-1-to-3: Zero-shot one image to 3d object. In ICCV."},{"key":"e_1_2_2_54_1","volume-title":"SyncDreamer: Learning to Generate Multiview-consistent Images from a Single-view Image. arXiv preprint arXiv:2309.03453","author":"Liu Yuan","year":"2023","unstructured":"Yuan Liu, Cheng Lin, Zijiao Zeng, Xiaoxiao Long, Lingjie Liu, Taku Komura, and Wenping Wang. 2023b. SyncDreamer: Learning to Generate Multiview-consistent Images from a Single-view Image. 
arXiv preprint arXiv:2309.03453 (2023)."},{"key":"e_1_2_2_55_1","doi-asserted-by":"crossref","unstructured":"Yuan Liu Peng Wang Cheng Lin Xiaoxiao Long Jiepeng Wang Lingjie Liu Taku Komura and Wenping Wang. 2023c. NeRO: Neural Geometry and BRDF Reconstruction of Reflective Objects from Multiview Images. In SIGGRAPH.","DOI":"10.1145\/3592134"},{"key":"e_1_2_2_56_1","volume-title":"Text-Guided Texturing by Synchronized Multi-View Diffusion. arXiv preprint arXiv:2311.12891","author":"Liu Yuxin","year":"2023","unstructured":"Yuxin Liu, Minshan Xie, Hanyuan Liu, and Tien-Tsin Wong. 2023e. Text-Guided Texturing by Synchronized Multi-View Diffusion. arXiv preprint arXiv:2311.12891 (2023)."},{"key":"e_1_2_2_57_1","volume-title":"UniDream: Unifying Diffusion Priors for Relightable Text-to-3D Generation. arXiv preprint arXiv:2312.08754","author":"Liu Zexiang","year":"2023","unstructured":"Zexiang Liu, Yangguang Li, Youtian Lin, Xin Yu, Sida Peng, Yan-Pei Cao, Xiaojuan Qi, Xiaoshui Huang, Ding Liang, and Wanli Ouyang. 2023a. UniDream: Unifying Diffusion Priors for Relightable Text-to-3D Generation. arXiv preprint arXiv:2312.08754 (2023)."},{"key":"e_1_2_2_58_1","volume-title":"Computer Graphics Forum","author":"Luan Fujun","unstructured":"Fujun Luan, Shuang Zhao, Kavita Bala, and Zhao Dong. 2021. Unified shape and svbrdf recovery using differentiable monte carlo rendering. In Computer Graphics Forum, Vol. 40. Wiley Online Library, 101--113."},{"key":"e_1_2_2_59_1","first-page":"1","article-title":"Diffusion Posterior Illumination for Ambiguity-aware Inverse Rendering","volume":"42","author":"Lyu Linjie","year":"2023","unstructured":"Linjie Lyu, Ayush Tewari, Marc Habermann, Shunsuke Saito, Michael Zollh\u00f6fer, Thomas Leimk\u00fchler, and Christian Theobalt. 2023. Diffusion Posterior Illumination for Ambiguity-aware Inverse Rendering. 
ACM Transactions on Graphics (TOG) 42, 6 (2023), 1--14.","journal-title":"ACM Transactions on Graphics (TOG)"},{"key":"e_1_2_2_60_1","unstructured":"Yiwei Ma Xiaoqing Zhang Xiaoshuai Sun Jiayi Ji Haowei Wang Guannan Jiang Weilin Zhuang and Rongrong Ji. 2023. X-Mesh: Towards Fast and Accurate Text-driven 3D Stylization via Dynamic Textual Guidance. In ICCV."},{"key":"e_1_2_2_61_1","volume-title":"Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures. arXiv preprint arXiv:2211.07600","author":"Metzer Gal","year":"2022","unstructured":"Gal Metzer, Elad Richardson, Or Patashnik, Raja Giryes, and Daniel Cohen-Or. 2022. Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures. arXiv preprint arXiv:2211.07600 (2022)."},{"key":"e_1_2_2_62_1","volume-title":"Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV.","author":"Mildenhall Ben","year":"2020","unstructured":"Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. 2020. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV."},{"key":"e_1_2_2_63_1","doi-asserted-by":"publisher","DOI":"10.1145\/3528223.3530127"},{"key":"e_1_2_2_64_1","doi-asserted-by":"crossref","unstructured":"Jacob Munkberg Jon Hasselgren Tianchang Shen Jun Gao Wenzheng Chen Alex Evans Thomas M\u00fcller and Sanja Fidler. 2022. Extracting Triangular 3D Models Materials and Lighting From Images. In CVPR.","DOI":"10.1109\/CVPR52688.2022.00810"},{"key":"e_1_2_2_65_1","doi-asserted-by":"publisher","DOI":"10.1145\/3272127.3275017"},{"key":"e_1_2_2_66_1","doi-asserted-by":"publisher","DOI":"10.1145\/3355089.3356498"},{"key":"e_1_2_2_67_1","volume-title":"ControlDreamer: Stylized 3D Generation with Multi-View ControlNet. arXiv preprint arXiv:2312.01129","author":"Oh Yeongtak","year":"2023","unstructured":"Yeongtak Oh, Jooyoung Choi, Yongsung Kim, Minjun Park, Chaehun Shin, and Sungroh Yoon. 2023. 
ControlDreamer: Stylized 3D Generation with Multi-View ControlNet. arXiv preprint arXiv:2312.01129 (2023)."},{"key":"e_1_2_2_68_1","unstructured":"Ben Poole Ajay Jain Jonathan T. Barron and Ben Mildenhall. 2022. DreamFusion: Text-to-3D using 2D Diffusion. In ICLR."},{"key":"e_1_2_2_69_1","volume-title":"Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors. arXiv preprint arXiv:2306.17843","author":"Qian Guocheng","year":"2023","unstructured":"Guocheng Qian, Jinjie Mai, Abdullah Hamdi, Jian Ren, Aliaksandr Siarohin, Bing Li, Hsin-Ying Lee, Ivan Skorokhodov, Peter Wonka, Sergey Tulyakov, and Bernard Ghanem. 2023. Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors. arXiv preprint arXiv:2306.17843 (2023)."},{"key":"e_1_2_2_70_1","volume-title":"Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.","author":"Radford Alec","year":"2021","unstructured":"Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In ICML."},{"key":"e_1_2_2_71_1","volume-title":"Texture: Text-guided texturing of 3d shapes. In SIGGRAPH.","author":"Richardson Elad","year":"2023","unstructured":"Elad Richardson, Gal Metzer, Yuval Alaluf, Raja Giryes, and Daniel Cohen-Or. 2023. Texture: Text-guided texturing of 3d shapes. In SIGGRAPH."},{"key":"e_1_2_2_72_1","doi-asserted-by":"crossref","unstructured":"Robin Rombach Andreas Blattmann Dominik Lorenz Patrick Esser and Bj\u00f6rn Ommer. 2022. High-resolution image synthesis with latent diffusion models. In CVPR.","DOI":"10.1109\/CVPR52688.2022.01042"},{"key":"e_1_2_2_73_1","doi-asserted-by":"crossref","unstructured":"Sam Sartor and Pieter Peers. 2023. MatFusion: A Generative Diffusion Model for SVBRDF Capture. 
In SIGGRAPH Asia.","DOI":"10.1145\/3610548.3618194"},{"key":"e_1_2_2_74_1","unstructured":"Sketchfab. [n. d.]. Sketchfab - The best 3D viewer on the web. https:\/\/www.sketchfab.com"},{"key":"e_1_2_2_75_1","volume-title":"Nerv: Neural reflectance and visibility fields for relighting and view synthesis. In CVPR.","author":"Srinivasan Pratul P","year":"2021","unstructured":"Pratul P Srinivasan, Boyang Deng, Xiuming Zhang, Matthew Tancik, Ben Mildenhall, and Jonathan T Barron. 2021. Nerv: Neural reflectance and visibility fields for relighting and view synthesis. In CVPR."},{"key":"e_1_2_2_76_1","doi-asserted-by":"crossref","unstructured":"Cheng Sun Guangyan Cai Zhengqin Li Kai Yan Cheng Zhang Carl Marshall Jia-Bin Huang Shuang Zhao and Zhao Dong. 2023a. Neural-PBIR reconstruction of shape material and illumination. In CVPR.","DOI":"10.1109\/ICCV51070.2023.01654"},{"key":"e_1_2_2_77_1","volume-title":"Dreamcraft3d: Hierarchical 3d generation with bootstrapped diffusion prior. arXiv preprint arXiv:2310.16818","author":"Sun Jingxiang","year":"2023","unstructured":"Jingxiang Sun, Bo Zhang, Ruizhi Shao, Lizhen Wang, Wen Liu, Zhenda Xie, and Yebin Liu. 2023b. Dreamcraft3d: Hierarchical 3d generation with bootstrapped diffusion prior. arXiv preprint arXiv:2310.16818 (2023)."},{"key":"e_1_2_2_78_1","volume-title":"DINAR: Diffusion Inpainting of Neural Textures for One-Shot Human Avatars. In ICCV.","author":"Svitov David","year":"2023","unstructured":"David Svitov, Dmitrii Gudkov, Renat Bashirov, and Victor Lempitsky. 2023. DINAR: Diffusion Inpainting of Neural Textures for One-Shot Human Avatars. In ICCV."},{"key":"e_1_2_2_79_1","unstructured":"Shitao Tang Fuyang Zhang Jiacheng Chen Peng Wang and Yasutaka Furukawa. 2023. MVDiffusion: Enabling Holistic Multi-view Image Generation with Correspondence-Aware Diffusion. (2023)."},{"key":"e_1_2_2_80_1","volume-title":"Text-guided High-definition Consistency Texture Model. 
arXiv preprint arXiv:2305.05901","author":"Tang Zhibin","year":"2023","unstructured":"Zhibin Tang and Tiantong He. 2023. Text-guided High-definition Consistency Texture Model. arXiv preprint arXiv:2305.05901 (2023)."},{"key":"e_1_2_2_81_1","unstructured":"Ayush Tewari Tianwei Yin George Cazenavette Semon Rezchikov Joshua B. Tenenbaum Fr\u00e9do Durand William T. Freeman and Vincent Sitzmann. 2023. Diffusion with Forward Models: Solving Stochastic Inverse Problems Without Direct Supervision. In NeurIPS."},{"key":"e_1_2_2_82_1","volume-title":"Ravi Ramamoorthi, and Henrik Wann Jensen.","author":"Thomson","year":"2023","unstructured":"Thomson TG, Jeppe Revall Frisvad, Ravi Ramamoorthi, and Henrik Wann Jensen. 2023. Neural BSSRDF: Object Appearance Representation Including Heterogeneous Subsurface Scattering. arXiv preprint arXiv:2312.15711 (2023)."},{"key":"e_1_2_2_83_1","volume-title":"ControlMat: A Controlled Generative Approach to Material Capture. arXiv preprint arXiv:2309.01700","author":"Vecchio Giuseppe","year":"2023","unstructured":"Giuseppe Vecchio, Rosalie Martin, Arthur Roullier, Adrien Kaiser, Romain Rouffet, Valentin Deschaintre, and Tamy Boubekeur. 2023a. ControlMat: A Controlled Generative Approach to Material Capture. arXiv preprint arXiv:2309.01700 (2023)."},{"key":"e_1_2_2_84_1","volume-title":"MatFuse: Controllable Material Generation with Diffusion Models. arXiv preprint arXiv:2308.11408","author":"Vecchio Giuseppe","year":"2023","unstructured":"Giuseppe Vecchio, Renato Sortino, Simone Palazzo, and Concetto Spampinato. 2023b. MatFuse: Controllable Material Generation with Diffusion Models. arXiv preprint arXiv:2308.11408 (2023)."},{"key":"e_1_2_2_85_1","unstructured":"Zhengyi Wang Cheng Lu Yikai Wang Fan Bao Chongxuan Li Hang Su and Jun Zhu. 2023. ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation. 
In NeurIPS."},{"key":"e_1_2_2_86_1","volume-title":"AnyHome: Open-Vocabulary Generation of Structured and Textured 3D Homes. arXiv preprint arXiv:2312.06644","author":"Wen Zehao","year":"2023","unstructured":"Zehao Wen, Zichen Liu, Srinath Sridhar, and Rao Fu. 2023. AnyHome: Open-Vocabulary Generation of Structured and Textured 3D Homes. arXiv preprint arXiv:2312.06644 (2023)."},{"key":"e_1_2_2_87_1","doi-asserted-by":"crossref","unstructured":"Felix Wimbauer Shangzhe Wu and Christian Rupprecht. 2022. De-rendering 3d objects in the wild. In CVPR.","DOI":"10.1109\/CVPR52688.2022.01794"},{"key":"e_1_2_2_88_1","doi-asserted-by":"publisher","DOI":"10.1145\/2980179.2980248"},{"key":"e_1_2_2_89_1","volume-title":"MATLABER: Material-Aware Text-to-3D via LAtent BRDF auto-EncodeR. arXiv preprint arXiv:2308.09278","author":"Xu Xudong","year":"2023","unstructured":"Xudong Xu, Zhaoyang Lyu, Xingang Pan, and Bo Dai. 2023. MATLABER: Material-Aware Text-to-3D via LAtent BRDF auto-EncodeR. arXiv preprint arXiv:2308.09278 (2023)."},{"key":"e_1_2_2_90_1","volume-title":"DreamSpace: Dreaming Your Room Space with Text-Driven Panoramic Texture Propagation. arXiv preprint arXiv:2310.13119","author":"Yang Bangbang","year":"2023","unstructured":"Bangbang Yang, Wenqi Dong, Lin Ma, Wenbo Hu, Xiao Liu, Zhaopeng Cui, and Yuewen Ma. 2023b. DreamSpace: Dreaming Your Room Space with Text-Driven Panoramic Texture Propagation. arXiv preprint arXiv:2310.13119 (2023)."},{"key":"e_1_2_2_91_1","doi-asserted-by":"crossref","unstructured":"Wenqi Yang Guanying Chen Chaofeng Chen Zhenfang Chen and Kwan-Yee K. Wong. 2022. PS-NeRF: Neural Inverse Rendering for Multi-view Photometric Stereo. In ECCV.","DOI":"10.1007\/978-3-031-19769-7_16"},{"key":"e_1_2_2_92_1","volume-title":"SIRe-IR: Inverse Rendering for BRDF Reconstruction with Shadow and Illumination Removal in High-Illuminance Scenes. 
arXiv preprint arXiv:2310.13030","author":"Yang Ziyi","year":"2023","unstructured":"Ziyi Yang, Yanzhen Chen, Xinyu Gao, Yazhen Yuan, Yu Wu, Xiaowei Zhou, and Xiaogang Jin. 2023a. SIRe-IR: Inverse Rendering for BRDF Reconstruction with Shadow and Illumination Removal in High-Illuminance Scenes. arXiv preprint arXiv:2310.13030 (2023)."},{"key":"e_1_2_2_93_1","volume-title":"Neilf: Neural incident light field for physically-based material estimation. In ECCV.","author":"Yao Yao","year":"2022","unstructured":"Yao Yao, Jingyang Zhang, Jingbo Liu, Yihang Qu, Tian Fang, David McKinnon, Yanghai Tsin, and Long Quan. 2022. Neilf: Neural incident light field for physically-based material estimation. In ECCV."},{"key":"e_1_2_2_94_1","unstructured":"Lior Yariv Yoni Kasten Dror Moran Meirav Galun Matan Atzmon Basri Ronen and Yaron Lipman. 2020. Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance. In NeurIPS."},{"key":"e_1_2_2_95_1","volume-title":"Intrinsicnerf: Learning intrinsic neural radiance fields for editable novel view synthesis. In ICCV.","author":"Ye Weicai","year":"2023","unstructured":"Weicai Ye, Shuo Chen, Chong Bao, Hujun Bao, Marc Pollefeys, Zhaopeng Cui, and Guofeng Zhang. 2023. Intrinsicnerf: Learning intrinsic neural radiance fields for editable novel view synthesis. In ICCV."},{"key":"e_1_2_2_96_1","unstructured":"Jonathan Young. 2021. xatlas. https:\/\/github.com\/jpcy\/xatlas.git"},{"key":"e_1_2_2_97_1","volume-title":"Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering. arXiv preprint arXiv:2312.11360","author":"Youwang Kim","year":"2023","unstructured":"Kim Youwang, Tae-Hyun Oh, and Gerard Pons-Moll. 2023. Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering. 
arXiv preprint arXiv:2312.11360 (2023)."},{"key":"e_1_2_2_98_1","doi-asserted-by":"crossref","unstructured":"Xin Yu Peng Dai Wenbo Li Lan Ma Zhengzhe Liu and Xiaojuan Qi. 2023a. Texture Generation on 3D Meshes with Point-UV Diffusion. In ICCV.","DOI":"10.1109\/ICCV51070.2023.00388"},{"key":"e_1_2_2_99_1","volume-title":"Text-to-3d with classifier score distillation. arXiv preprint arXiv:2310.19415","author":"Yu Xin","year":"2023","unstructured":"Xin Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Song-Hai Zhang, and Xiaojuan Qi. 2023b. Text-to-3d with classifier score distillation. arXiv preprint arXiv:2310.19415 (2023)."},{"key":"e_1_2_2_100_1","volume-title":"Text-to-3d with classifier score distillation. arXiv preprint arXiv:2310.19415","author":"Yu Xin","year":"2023","unstructured":"Xin Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Song-Hai Zhang, and Xiaojuan Qi. 2023c. Text-to-3d with classifier score distillation. arXiv preprint arXiv:2310.19415 (2023)."},{"key":"e_1_2_2_101_1","volume-title":"Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models. arXiv preprint arXiv:2312.13913","author":"Zeng Xianfang","year":"2023","unstructured":"Xianfang Zeng, Xin Chen, Zhongqi Qi, Wen Liu, Zibo Zhao, Zhibin Wang, Bin Fu, Yong Liu, and Gang Yu. 2023. Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models. arXiv preprint arXiv:2312.13913 (2023)."},{"key":"e_1_2_2_102_1","volume-title":"Repaint123: Fast and High-quality One Image to 3D Generation with Progressive Controllable 2D Repainting. arXiv preprint arXiv:2312.13271","author":"Zhang Junwu","year":"2023","unstructured":"Junwu Zhang, Zhenyu Tang, Yatian Pang, Xinhua Cheng, Peng Jin, Yida Wei, Wangbo Yu, Munan Ning, and Li Yuan. 2023b. Repaint123: Fast and High-quality One Image to 3D Generation with Progressive Controllable 2D Repainting. arXiv preprint arXiv:2312.13271 (2023)."},{"key":"e_1_2_2_103_1","volume-title":"Inter-Reflectable Light Fields for Geometry and Material Estimation. 
arXiv preprint arXiv:2303.17147","author":"Zhang Jingyang","year":"2023","unstructured":"Jingyang Zhang, Yao Yao, Shiwei Li, Jingbo Liu, Tian Fang, David McKinnon, Yanghai Tsin, and Long Quan. 2023c. NeILF++: Inter-Reflectable Light Fields for Geometry and Material Estimation. arXiv preprint arXiv:2303.17147 (2023)."},{"key":"e_1_2_2_104_1","volume-title":"Iron: Inverse rendering by optimizing neural sdfs and materials from photometric images. In CVPR.","author":"Zhang Kai","year":"2022","unstructured":"Kai Zhang, Fujun Luan, Zhengqi Li, and Noah Snavely. 2022a. Iron: Inverse rendering by optimizing neural sdfs and materials from photometric images. In CVPR."},{"key":"e_1_2_2_105_1","doi-asserted-by":"crossref","unstructured":"Kai Zhang Fujun Luan Qianqian Wang Kavita Bala and Noah Snavely. 2021a. PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting. In CVPR.","DOI":"10.1109\/CVPR46437.2021.00541"},{"key":"e_1_2_2_106_1","doi-asserted-by":"crossref","unstructured":"Lvmin Zhang Anyi Rao and Maneesh Agrawala. 2023a. Adding Conditional Control to Text-to-Image Diffusion Models. In ICCV.","DOI":"10.1109\/ICCV51070.2023.00355"},{"key":"e_1_2_2_107_1","doi-asserted-by":"publisher","DOI":"10.1145\/3478513.3480500"},{"key":"e_1_2_2_108_1","doi-asserted-by":"crossref","unstructured":"Yuanqing Zhang Jiaming Sun Xingyi He Huan Fu Rongfei Jia and Xiaowei Zhou. 2022b. Modeling Indirect Illumination for Inverse Rendering. In CVPR.","DOI":"10.1109\/CVPR52688.2022.01809"},{"key":"e_1_2_2_109_1","volume-title":"Polarimetric multi-view inverse rendering. TPAMI","author":"Zhao Jinyu","year":"2022","unstructured":"Jinyu Zhao, Yusuke Monno, and Masatoshi Okutomi. 2022. Polarimetric multi-view inverse rendering. TPAMI (2022)."},{"key":"e_1_2_2_110_1","doi-asserted-by":"crossref","unstructured":"Xilong Zhou Milos Hasan Valentin Deschaintre Paul Guerrero Kalyan Sunkavalli and Nima Khademi Kalantari. 2022. 
TileGen: Tileable Controllable Material Generation and Capture. In SIGGRAPH Asia.","DOI":"10.1145\/3550469.3555403"},{"key":"e_1_2_2_111_1","doi-asserted-by":"crossref","unstructured":"Zhizhuo Zhou and Shubham Tulsiani. 2023. SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction. In CVPR.","DOI":"10.1109\/CVPR52729.2023.01211"},{"key":"e_1_2_2_112_1","unstructured":"Jingsen Zhu Yuchi Huo Qi Ye Fujun Luan Jifan Li Dianbing Xi Lisha Wang Rui Tang Wei Hua Hujun Bao et al. 2023. I2-SDF: Intrinsic Indoor Scene Reconstruction and Editing via Raytracing in Neural SDFs. In CVPR."},{"key":"e_1_2_2_113_1","volume-title":"HiFA: High-fidelity Text-to-3D Generation with Advanced Diffusion Guidance. arXiv preprint arXiv:2305.18766","author":"Zhu Junzhe","year":"2023","unstructured":"Junzhe Zhu and Peiye Zhuang. 2023. HiFA: High-fidelity Text-to-3D Generation with Advanced Diffusion Guidance. arXiv preprint arXiv:2305.18766 (2023)."}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3658170","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3658170","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T00:05:54Z","timestamp":1750291554000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3658170"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,7,19]]},"references-count":113,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2024,7,19]]}},"alternative-id":["10.1145\/3658170"],"URL":"https:\/\/doi.org\/10.1145\/3658170","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"value":"0730-0301","type":"print"},{"value":"1557-7368","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,7,19]]},"assertion":[{"
value":"2024-07-19","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}