{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,21]],"date-time":"2026-03-21T02:16:03Z","timestamp":1774059363046,"version":"3.50.1"},"reference-count":31,"publisher":"Association for Computing Machinery (ACM)","issue":"6","license":[{"start":{"date-parts":[[2020,11,27]],"date-time":"2020-11-27T00:00:00Z","timestamp":1606435200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/100000001","name":"U.S. National Science Foundation","doi-asserted-by":"crossref","award":["CHS-1815585"],"award-info":[{"award-number":["CHS-1815585"]}],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2020,12,31]]},"abstract":"<jats:p>We present <jats:italic>MATch<\/jats:italic>, a method to automatically convert photographs of material samples into production-grade procedural material models. At the core of MATch is a new library <jats:italic>DiffMat<\/jats:italic> that provides differentiable building blocks for constructing procedural materials, and automatic translation of large-scale procedural models, with hundreds to thousands of node parameters, into differentiable node graphs. Combining these translated node graphs with a rendering layer yields an end-to-end differentiable pipeline that maps node graph parameters to rendered images. This facilitates the use of gradient-based optimization to estimate the parameters such that the resulting material, when rendered, matches the target image appearance, as quantified by a style transfer loss. In addition, we propose a deep neural feature-based graph selection and parameter initialization method that efficiently scales to a large number of procedural graphs. We evaluate our method on both rendered synthetic materials and real materials captured as flash photographs. We demonstrate that MATch can reconstruct more accurate, general, and complex procedural materials compared to the state-of-the-art. Moreover, by producing a procedural output, we unlock capabilities such as constructing arbitrary-resolution material maps and parametrically editing the material appearance.<\/jats:p>","DOI":"10.1145\/3414685.3417781","type":"journal-article","created":{"date-parts":[[2020,11,27]],"date-time":"2020-11-27T21:51:05Z","timestamp":1606513865000},"page":"1-15","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":53,"title":["MATch"],"prefix":"10.1145","volume":"39","author":[{"given":"Liang","family":"Shi","sequence":"first","affiliation":[{"name":"MIT CSAIL"}]},{"given":"Beichen","family":"Li","sequence":"additional","affiliation":[{"name":"MIT CSAIL"}]},{"given":"Milo\u0161","family":"Ha\u0161an","sequence":"additional","affiliation":[{"name":"Adobe Research"}]},{"given":"Kalyan","family":"Sunkavalli","sequence":"additional","affiliation":[{"name":"Adobe Research"}]},{"given":"Tamy","family":"Boubekeur","sequence":"additional","affiliation":[{"name":"Adobe"}]},{"given":"Radomir","family":"Mech","sequence":"additional","affiliation":[{"name":"Adobe Research"}]},{"given":"Wojciech","family":"Matusik","sequence":"additional","affiliation":[{"name":"MIT CSAIL"}]}],"member":"320","published-online":{"date-parts":[[2020,11,27]]},"reference":[{"key":"e_1_2_2_1_1","unstructured":"Adobe. 2019. Substance. https:\/\/docs.substance3d.com\/sat."},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925917"},{"key":"e_1_2_2_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/2461912.2461978"},{"key":"e_1_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/2766967"},{"key":"e_1_2_2_5_1","volume-title":"ACM SIGGRAPH 2012 Courses.","author":"Burley Brett","year":"2012","unstructured":"Brett Burley. 2012. Physically-based shading at Disney. In ACM SIGGRAPH 2012 Courses."},{"key":"e_1_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201378"},{"key":"e_1_2_2_7_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.13765"},{"key":"e_1_2_2_8_1","volume-title":"Computer Graphics Forum","author":"Deschaintre Valentin","unstructured":"Valentin Deschaintre, George Drettakis, and Adrien Bousseau. 2020. Guided Fine-Tuning for Large-Scale Material Transfer. In Computer Graphics Forum, Vol. 39. Wiley Online Library, 91--105."},{"key":"e_1_2_2_9_1","volume-title":"Fast Graph Representation Learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds.","author":"Fey Matthias","unstructured":"Matthias Fey and Jan E. Lenssen. 2019. Fast Graph Representation Learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds."},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/2185520.2185569"},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.13073"},{"key":"e_1_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3323042"},{"key":"e_1_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/882262.882342"},{"key":"e_1_2_2_14_1","unstructured":"Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. 2015. A Neural Algorithm of Artistic Style. arXiv:cs.CV\/1508.06576"},{"key":"e_1_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.265"},{"key":"e_1_2_2_16_1","volume-title":"Computer Graphics Forum","author":"Guarnera Dar'ya","year":"2016","unstructured":"Dar'ya Guarnera, Giuseppe Claudio Guarnera, Abhijeet Ghosh, Cornelia Denk, and Mashhuda Glencross. 2016. BRDF Representation and Acquisition. Computer Graphics Forum (2016)."},{"key":"e_1_2_2_17_1","unstructured":"Yu Guo, Milos Hasan, Lingqi Yan, and Shuang Zhao. 2019. A Bayesian Inference Framework for Procedural Material Parameter Estimation. arXiv:cs.GR\/1912.01067"},{"key":"e_1_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/3233304"},{"key":"e_1_2_2_19_1","volume-title":"The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).","author":"Kato Hiroharu","year":"2018","unstructured":"Hiroharu Kato, Yoshitaka Ushiku, and Tatsuya Harada. 2018. Neural 3D Mesh Renderer. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)."},{"key":"e_1_2_2_20_1","volume-title":"DiffTaichi: Differentiable Programming for Physical Simulation. ICLR","author":"Hu Yuanming","year":"2020","unstructured":"Yuanming Hu, Luke Anderson, Tzu-Mao Li, Qi Sun, Nathan Carr, Jonathan Ragan-Kelley, and Fr\u00e9do Durand. 2020. DiffTaichi: Differentiable Programming for Physical Simulation. ICLR (2020)."},{"key":"e_1_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3355089.3356490"},{"key":"e_1_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3181974"},{"key":"e_1_2_2_23_1","volume-title":"International Conference on Learning Representations (ICLR)","author":"Kingma Diederik","year":"2014","unstructured":"Diederik Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. International Conference on Learning Representations (ICLR) (12 2014)."},{"key":"e_1_2_2_24_1","unstructured":"Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems. 1097--1105."},{"key":"e_1_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3272127.3275109"},{"key":"e_1_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01219-9_5"},{"key":"e_1_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-10584-0_11"},{"key":"e_1_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/3355089.3356498"},{"key":"e_1_2_2_29_1","doi-asserted-by":"crossref","unstructured":"E. Riba, D. Mishkin, D. Ponsa, E. Rublee, and G. Bradski. 2020. Kornia: an Open Source Differentiable Computer Vision Library for PyTorch. https:\/\/arxiv.org\/pdf\/1910.02190.pdf","DOI":"10.1109\/WACV45572.2020.9093363"},{"key":"e_1_2_2_30_1","volume-title":"Very Deep Convolutional Networks for Large-Scale Image Recognition. In International Conference on Learning Representations (ICLR).","author":"Simonyan Karen","year":"2015","unstructured":"Karen Simonyan and Andrew Zisserman. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition. In International Conference on Learning Representations (ICLR)."},{"key":"e_1_2_2_31_1","volume-title":"MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction. In IEEE International Conference on Computer Vision (ICCV). 3735--3744","author":"Tewari Ayush","year":"2017","unstructured":"Ayush Tewari, Michael Zollh\u00f6fer, Hyeongwoo Kim, Pablo Garrido, Florian Bernard, Patrick P\u00e9rez, and Christian Theobalt. 2017. MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction. In IEEE International Conference on Computer Vision (ICCV). 3735--3744."}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3414685.3417781","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3414685.3417781","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T22:03:14Z","timestamp":1750197794000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3414685.3417781"}},"subtitle":["differentiable material graphs for procedural material capture"],"short-title":[],"issued":{"date-parts":[[2020,11,27]]},"references-count":31,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2020,12,31]]}},"alternative-id":["10.1145\/3414685.3417781"],"URL":"https:\/\/doi.org\/10.1145\/3414685.3417781","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"value":"0730-0301","type":"print"},{"value":"1557-7368","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,11,27]]},"assertion":[{"value":"2020-11-27","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}