{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,31]],"date-time":"2026-03-31T02:49:54Z","timestamp":1774925394830,"version":"3.50.1"},"reference-count":42,"publisher":"Association for Computing Machinery (ACM)","issue":"6","license":[{"start":{"date-parts":[[2020,11,27]],"date-time":"2020-11-27T00:00:00Z","timestamp":1606435200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"NSF","award":["1813553"],"award-info":[{"award-number":["1813553"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2020,12,31]]},"abstract":"<jats:p>\n            We address the problem of reconstructing spatially-varying BRDFs from a small set of image measurements. This is a fundamentally under-constrained problem, and previous work has relied on using various regularization priors or on capturing many images to produce plausible results. In this work, we present\n            <jats:italic>MaterialGAN<\/jats:italic>\n            , a deep generative convolutional network based on StyleGAN2, trained to synthesize realistic SVBRDF parameter maps. We show that MaterialGAN can be used as a powerful material prior in an inverse rendering framework: we optimize in its latent representation to generate material maps that match the appearance of the captured images when rendered. We demonstrate this framework on the task of reconstructing SVBRDFs from images captured under flash illumination using a hand-held mobile phone. Our method succeeds in producing plausible material maps that accurately reproduce the target images, and outperforms previous state-of-the-art material capture methods in evaluations on both synthetic and real data. 
Furthermore, our GAN-based latent space allows for high-level semantic material editing operations such as generating material variations and material morphing.\n          <\/jats:p>","DOI":"10.1145\/3414685.3417779","type":"journal-article","created":{"date-parts":[[2020,11,27]],"date-time":"2020-11-27T21:51:05Z","timestamp":1606513865000},"page":"1-13","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":95,"title":["MaterialGAN"],"prefix":"10.1145","volume":"39","author":[{"given":"Yu","family":"Guo","sequence":"first","affiliation":[{"name":"University of California"}]},{"given":"Cameron","family":"Smith","sequence":"additional","affiliation":[{"name":"Adobe Research"}]},{"given":"Milo\u0161","family":"Ha\u0161an","sequence":"additional","affiliation":[{"name":"Adobe Research"}]},{"given":"Kalyan","family":"Sunkavalli","sequence":"additional","affiliation":[{"name":"Adobe Research"}]},{"given":"Shuang","family":"Zhao","sequence":"additional","affiliation":[{"name":"University of California"}]}],"member":"320","published-online":{"date-parts":[[2020,11,27]]},"reference":[{"key":"e_1_2_2_1_1","doi-asserted-by":"crossref","unstructured":"Rameen Abdal, Yipeng Qin, and Peter Wonka. 2019a. Image2StyleGAN++: How to Edit the Embedded Images? arXiv:1911.11544","DOI":"10.1109\/CVPR42600.2020.00832"},{"key":"e_1_2_2_2_1","doi-asserted-by":"crossref","unstructured":"Rameen Abdal, Yipeng Qin, and Peter Wonka. 2019b. Image2StyleGAN: How to Embed Images Into the StyleGAN Latent Space? 
arXiv:1904.03189","DOI":"10.1109\/ICCV.2019.00453"},{"key":"e_1_2_2_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925917"},{"key":"e_1_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/2461912.2461978"},{"key":"e_1_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/2766967"},{"key":"e_1_2_2_6_1","volume-title":"Invertible generative models for inverse problems: mitigating representation error and dataset bias. arXiv preprint arXiv:1905.11672","author":"Asim Muhammad","year":"2019","unstructured":"Muhammad Asim, Ali Ahmed, and Paul Hand. 2019. Invertible generative models for inverse problems: mitigating representation error and dataset bias. arXiv preprint arXiv:1905.11672 (2019)."},{"key":"e_1_2_2_7_1","volume-title":"Dimakis","author":"Bora Ashish","year":"2017","unstructured":"Ashish Bora, Ajil Jalal, Eric Price, and Alexandros G. Dimakis. 2017. Compressed Sensing using Generative Models (Proceedings of Machine Learning Research), Vol. 70. 537--546."},{"key":"e_1_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201378"},{"key":"e_1_2_2_9_1","volume-title":"Flexible SVBRDF Capture with a Multi-Image Deep Network. Computer Graphics Forum 38, 4","author":"Deschaintre Valentin","year":"2019","unstructured":"Valentin Deschaintre, Miika Aittala, Fr\u00e9do Durand, George Drettakis, and Adrien Bousseau. 2019. 
Flexible SVBRDF Capture with a Multi-Image Deep Network. Computer Graphics Forum 38, 4 (2019)."},{"key":"e_1_2_2_10_1","volume-title":"Synthesizing Audio with Generative Adversarial Networks. CoRR abs\/1802.04208","author":"Donahue Chris","year":"2018","unstructured":"Chris Donahue, Julian McAuley, and Miller Puckette. 2018. Synthesizing Audio with Generative Adversarial Networks. CoRR abs\/1802.04208 (2018). arXiv:1802.04208"},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.visinf.2019.07.003"},{"key":"e_1_2_2_12_1","volume-title":"Advances in Visual Computing","author":"Francken Yannick","unstructured":"Yannick Francken, Tom Cuypers, Tom Mertens, and Philippe Bekaert. 2009. Gloss and Normal Map Acquisition of Mesostructures Using Gray Codes. In Advances in Visual Computing, Vol. 5876. Springer, 788--798."},{"key":"e_1_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3323042"},{"key":"e_1_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/882262.882342"},{"key":"e_1_2_2_15_1","unstructured":"Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. 2015. A Neural Algorithm of Artistic Style. arXiv:1508.06576"},{"key":"e_1_2_2_16_1","volume-title":"Image Style Transfer Using Convolutional Neural Networks. In CVPR","author":"Gatys L. A.","year":"2016","unstructured":"L. A. Gatys, A. S. Ecker, and M. Bethge. 
2016. Image Style Transfer Using Convolutional Neural Networks. In CVPR 2016. 2414--2423."},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1111\/j.1467-8659.2009.01493.x"},{"key":"e_1_2_2_18_1","unstructured":"Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014a. Generative Adversarial Nets. In Advances in Neural Information Processing Systems 27. 2672--2680."},{"key":"e_1_2_2_19_1","unstructured":"Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014b. Generative Adversarial Nets. In Advances in Neural Information Processing Systems 27. 2672--2680."},{"key":"e_1_2_2_20_1","volume-title":"Abhijeet Ghosh, Cornelia Denk, and Mashhuda Glencross.","author":"Guarnera Dar'ya","year":"2016","unstructured":"Dar'ya Guarnera, Giuseppe Claudio Guarnera, Abhijeet Ghosh, Cornelia Denk, and Mashhuda Glencross. 2016. BRDF Representation and Acquisition. Computer Graphics Forum (2016)."},{"key":"e_1_2_2_21_1","volume-title":"Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization. In ICCV","author":"Huang Xun","year":"2017","unstructured":"
Xun Huang and Serge Belongie. 2017. Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization. In ICCV 2017."},{"key":"e_1_2_2_22_1","volume-title":"Reflectance Capture Using Univariate Sampling of BRDFs. In ICCV","author":"Hui Zhuo","year":"2017","unstructured":"Zhuo Hui, Kalyan Sunkavalli, Joon-Young Lee, Sunil Hadap, Jian Wang, and Aswin C. Sankaranarayanan. 2017. Reflectance Capture Using Univariate Sampling of BRDFs. In ICCV 2017."},{"key":"e_1_2_2_23_1","volume-title":"Perceptual Losses for Real-Time Style Transfer and Super-Resolution. In ECCV","author":"Johnson Justin","year":"2016","unstructured":"Justin Johnson, Alexandre Alahi, and Fei-Fei Li. 2016. Perceptual Losses for Real-Time Style Transfer and Super-Resolution. In ECCV 2016."},{"key":"e_1_2_2_24_1","volume-title":"ICLR","author":"Karras Tero","year":"2018","unstructured":"Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. 2018a. Progressive Growing of GANs for Improved Quality, Stability, and Variation. In ICLR 2018."},{"key":"e_1_2_2_25_1","doi-asserted-by":"crossref","unstructured":"Tero Karras, Samuli Laine, and Timo Aila. 2018b. A Style-Based Generator Architecture for Generative Adversarial Networks. 
arXiv:1812.04948","DOI":"10.1109\/CVPR.2019.00453"},{"key":"e_1_2_2_26_1","doi-asserted-by":"crossref","unstructured":"Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. 2019. Analyzing and Improving the Image Quality of StyleGAN. arXiv:1912.04958","DOI":"10.1109\/CVPR42600.2020.00813"},{"key":"e_1_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3073641"},{"key":"e_1_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00568"},{"key":"e_1_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01219-9_5"},{"key":"e_1_2_2_30_1","volume-title":"Image-Based BRDF Measurement Including Human Skin. In Eurographics Workshop on Rendering.","author":"Marschner Stephen R.","unstructured":"Stephen R. Marschner, Stephen H. Westin, Eric P. F. Lafortune, Kenneth E. Torrance, and Donald P. Greenberg. 1999. Image-Based BRDF Measurement Including Human Skin. In Eurographics Workshop on Rendering."},{"key":"e_1_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/882262.882343"},{"key":"e_1_2_2_32_1","volume-title":"Experimental Analysis of BRDF Models. In EGSR","author":"Ngan Addy","year":"2005","unstructured":"Addy Ngan, Fr\u00e9do Durand, and Wojciech Matusik. 2005. Experimental Analysis of BRDF Models. In EGSR 2005. 117--226."},{"key":"e_1_2_2_33_1","volume-title":"Learning to regularize with a variational autoencoder for hydrologic inverse analysis. 
arXiv preprint arXiv:1906.02401","author":"O'Malley Daniel","year":"2019","unstructured":"Daniel O'Malley, John K Golden, and Velimir V Vesselinov. 2019. Learning to regularize with a variational autoencoder for hydrologic inverse analysis. arXiv preprint arXiv:1906.02401 (2019)."},{"key":"e_1_2_2_34_1","volume-title":"Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. CoRR abs\/1511.06434","author":"Radford Alec","year":"2015","unstructured":"Alec Radford, Luke Metz, and Soumith Chintala. 2015. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. CoRR abs\/1511.06434 (2015). arXiv:1511.06434"},{"key":"e_1_2_2_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/2010324.1964940"},{"key":"e_1_2_2_36_1","volume-title":"Very Deep Convolutional Networks for Large-Scale Image Recognition. In ICLR","author":"Simonyan Karen","year":"2015","unstructured":"Karen Simonyan and Andrew Zisserman. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition. In ICLR 2015."},{"key":"e_1_2_2_37_1","volume-title":"MoCoGAN: Decomposing Motion and Content for Video Generation. In CVPR","author":"Tulyakov Sergey","year":"2018","unstructured":"Sergey Tulyakov, Ming-Yu Liu, Xiaodong Yang, and Jan Kautz. 2018. 
MoCoGAN: Decomposing Motion and Content for Video Generation. In CVPR 2018."},{"key":"e_1_2_2_38_1","volume-title":"Microfacet Models for Refraction Through Rough Surfaces. EGSR 2007","author":"Walter Bruce","year":"2007","unstructured":"Bruce Walter, Stephen R. Marschner, Hongsong Li, and Kenneth E. Torrance. 2007. Microfacet Models for Refraction Through Rough Surfaces. EGSR 2007 (2007), 195--206."},{"key":"e_1_2_2_39_1","volume-title":"Szymon Rusinkiewicz, and Todd Zickler.","author":"Weyrich Tim","year":"2009","unstructured":"Tim Weyrich, Jason Lawrence, Hendrik PA Lensch, Szymon Rusinkiewicz, and Todd Zickler. 2009. Principles of appearance acquisition and representation. Now Publishers Inc."},{"key":"e_1_2_2_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/2980179.2982396"},{"key":"e_1_2_2_41_1","volume-title":"The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. CoRR abs\/1801.03924","author":"Zhang Richard","year":"2018","unstructured":"Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. 2018. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. CoRR abs\/1801.03924 (2018)."},{"key":"e_1_2_2_42_1","volume-title":"Efros","author":"Zhu Jun-Yan","year":"2016","unstructured":"Jun-Yan Zhu, Philipp Kr\u00e4henb\u00fchl, Eli Shechtman, and Alexei A. Efros. 2016. Generative Visual Manipulation on the Natural Image Manifold. 
arXiv:1609.03552"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3414685.3417779","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3414685.3417779","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3414685.3417779","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T22:03:14Z","timestamp":1750197794000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3414685.3417779"}},"subtitle":["reflectance capture using a generative SVBRDF model"],"short-title":[],"issued":{"date-parts":[[2020,11,27]]},"references-count":42,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2020,12,31]]}},"alternative-id":["10.1145\/3414685.3417779"],"URL":"https:\/\/doi.org\/10.1145\/3414685.3417779","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"value":"0730-0301","type":"print"},{"value":"1557-7368","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,11,27]]},"assertion":[{"value":"2020-11-27","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}