{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,10]],"date-time":"2025-12-10T12:43:12Z","timestamp":1765370592952,"version":"3.41.0"},"reference-count":71,"publisher":"Association for Computing Machinery (ACM)","issue":"6","license":[{"start":{"date-parts":[[2024,11,19]],"date-time":"2024-11-19T00:00:00Z","timestamp":1731974400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100000001","name":"National Science Foundation","doi-asserted-by":"publisher","award":["IIS-1909028"],"award-info":[{"award-number":["IIS-1909028"]}],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2024,12,19]]},"abstract":"<jats:p>We present a novel and flexible learning-based method for generating tileable image sets. Our method goes beyond simple self-tiling, supporting sets of mutually tileable images that exhibit a high degree of diversity. To promote diversity we decouple structure from content by foregoing explicit copying of patches from an exemplar image. Instead we leverage the prior knowledge of natural images and textures embedded in large-scale pretrained diffusion models to guide tile generation constrained by exterior boundary conditions and a text prompt to specify the content. By carefully designing and selecting the exterior boundary conditions, we can reformulate the tile generation process as an inpainting problem, allowing us to directly employ existing diffusion-based inpainting models without the need to retrain a model on a custom training set. We demonstrate the flexibility and efficacy of our content-aware tile generation method on different tiling schemes, such as Wang tiles, from only a text prompt. 
Furthermore, we introduce a novel Dual Wang tiling scheme that provides greater texture continuity and diversity than existing Wang tile variants.<\/jats:p>","DOI":"10.1145\/3687981","type":"journal-article","created":{"date-parts":[[2024,11,19]],"date-time":"2024-11-19T15:46:04Z","timestamp":1732031164000},"page":"1-12","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["Content-aware Tile Generation using Exterior Boundary Inpainting"],"prefix":"10.1145","volume":"43","author":[{"ORCID":"https:\/\/orcid.org\/0009-0001-1915-6887","authenticated-orcid":false,"given":"Sam","family":"Sartor","sequence":"first","affiliation":[{"name":"College of William &amp; Mary, Williamsburg, United States of America"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7621-9808","authenticated-orcid":false,"given":"Pieter","family":"Peers","sequence":"additional","affiliation":[{"name":"College of William &amp; Mary, Williamsburg, United States of America"}]}],"member":"320","published-online":{"date-parts":[[2024,11,19]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/1531326.1531330"},{"key":"e_1_2_1_2_1","unstructured":"Urs Bergmann Nikolay Jetchev and Roland Vollgraf. 2017. Learning texture manifolds with the Periodic Spatial GAN. In ICML. 469--477."},{"key":"e_1_2_1_3_1","doi-asserted-by":"crossref","unstructured":"Rui Chen Yongwei Chen Ningxin Jiao and Kui Jia. 2023. Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation. In ICCV.","DOI":"10.1109\/ICCV51070.2023.02033"},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/882262.882265"},{"key":"e_1_2_1_5_1","first-page":"8780","article-title":"Diffusion models beat gans on image synthesis","volume":"34","author":"Dhariwal Prafulla","year":"2021","unstructured":"Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion models beat gans on image synthesis. 
NeurIPS 34 (2021), 8780--8794.","journal-title":"NeurIPS"},{"key":"e_1_2_1_6_1","volume-title":"Proc. Siggraph","author":"Efros Alexei A","year":"2001","unstructured":"Alexei A Efros and William T Freeman. 2001. Image Quilting for Texture Synthesis and Transfer. In Proc. Siggraph 2001. 341--346."},{"key":"e_1_2_1_7_1","first-page":"1033","article-title":"Texture synthesis by non-parametric sampling","volume":"2","author":"Efros Alexei A","year":"1999","unstructured":"Alexei A Efros and Thomas K Leung. 1999. Texture synthesis by non-parametric sampling. In CVPR, Vol. 2. 1033--1038.","journal-title":"CVPR"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3322993"},{"key":"e_1_2_1_9_1","unstructured":"Chi-Wing Fu and Man-Kang Leung. 2005. Texture Tiling on Arbitrary Topological Surfaces using Wang Tiles. In Rendering Techniques. 99--104."},{"key":"e_1_2_1_10_1","volume-title":"Texture synthesis using convolutional neural networks. NeurIPS 28","author":"Gatys Leon","year":"2015","unstructured":"Leon Gatys, Alexander S Ecker, and Matthias Bethge. 2015. Texture synthesis using convolutional neural networks. NeurIPS 28 (2015)."},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/3565516.3565525"},{"key":"e_1_2_1_12_1","volume-title":"Modulating Pretrained Diffusion Models for Multimodal Image Synthesis. In ACM SIGGRAPH 2023 Conference Proceedings. Article 35","author":"Ham Cusuh","year":"2023","unstructured":"Cusuh Ham, James Hays, Jingwan Lu, Krishna Kumar Singh, Zhifei Zhang, and Tobias Hinz. 2023. Modulating Pretrained Diffusion Models for Multimodal Image Synthesis. In ACM SIGGRAPH 2023 Conference Proceedings. Article 35, 11 pages."},{"key":"e_1_2_1_13_1","volume-title":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques. 229--238","author":"Heeger David J","year":"1995","unstructured":"David J Heeger and James R Bergen. 1995. Pyramid-based texture analysis\/synthesis. 
In Proceedings of the 22nd annual conference on Computer graphics and interactive techniques. 229--238."},{"key":"e_1_2_1_14_1","doi-asserted-by":"crossref","unstructured":"Eric Heitz Kenneth Vanhoey Thomas Chambon and Laurent Belcour. 2021. A sliced wasserstein loss for neural texture synthesis. In CVPR. 9412--9420.","DOI":"10.1109\/CVPR46437.2021.00929"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/3478513.3480507"},{"key":"e_1_2_1_16_1","doi-asserted-by":"crossref","unstructured":"Philipp Henzler Niloy J Mitra and Tobias Ritschel. 2020. Learning a neural 3d texture space from 2d exemplars. In CVPR. 8356--8364.","DOI":"10.1109\/CVPR42600.2020.00838"},{"key":"e_1_2_1_17_1","volume-title":"Ronan Le Bras, and Yejin Choi","author":"Hessel Jack","year":"2021","unstructured":"Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. 2021. CLIPScore: A Reference-free Evaluation Metric for Image Captioning. In EMNLP."},{"key":"e_1_2_1_18_1","unstructured":"Tero Karras Miika Aittala Timo Aila and Samuli Laine. 2022. Elucidating the Design Space of Diffusion-Based Generative Models. In NeurIPS."},{"key":"e_1_2_1_19_1","volume-title":"Imagic: Text-based real image editing with diffusion models. In CVPR. 6007--6017.","author":"Kawar Bahjat","year":"2023","unstructured":"Bahjat Kawar, Shiran Zada, Oran Lang, Omer Tov, Huiwen Chang, Tali Dekel, Inbar Mosseri, and Michal Irani. 2023. Imagic: Text-based real image editing with diffusion models. In CVPR. 6007--6017."},{"key":"e_1_2_1_20_1","volume-title":"Diffusionclip: Text-guided diffusion models for robust image manipulation. In CVPR. 2426--2435.","author":"Kim Gwanghyun","year":"2022","unstructured":"Gwanghyun Kim, Taesung Kwon, and Jong Chul Ye. 2022. Diffusionclip: Text-guided diffusion models for robust image manipulation. In CVPR. 
2426--2435."},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00371-015-1161-4"},{"volume-title":"Recursive Wang tiles for real-time blue noise","author":"Kopf Johannes","key":"e_1_2_1_22_1","unstructured":"Johannes Kopf, Daniel Cohen-Or, Oliver Deussen, and Dani Lischinski. 2006. Recursive Wang tiles for real-time blue noise. Vol. 25. 509--518."},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/1186822.1073263"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/882262.882264"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/1095878.1095888"},{"key":"e_1_2_1_26_1","doi-asserted-by":"crossref","unstructured":"Chuan Li and Michael Wand. 2016. Precomputed real-time texture synthesis with markovian generative adversarial networks. In ECCV. 702--716.","DOI":"10.1007\/978-3-319-46487-9_43"},{"key":"e_1_2_1_27_1","unstructured":"Yijun Li Chen Fang Jimei Yang Zhaowen Wang Xin Lu and Ming-Hsuan Yang. 2017. Diversified texture synthesis with feed-forward networks. In CVPR. 3920--3928."},{"key":"e_1_2_1_28_1","volume-title":"Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick.","author":"Liu Ruoshi","year":"2023","unstructured":"Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. 2023. Zero-1-to-3: Zero-shot one image to 3d object. In ICCV. 9298--9309."},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.14610"},{"key":"e_1_2_1_30_1","volume-title":"Open-edit: Open-domain image manipulation with open-vocabulary instructions. In ECCV. 89--106.","author":"Liu Xihui","year":"2020","unstructured":"Xihui Liu, Zhe Lin, Jianming Zhang, Handong Zhao, Quan Tran, Xiaogang Wang, and Hongsheng Li. 2020. Open-edit: Open-domain image manipulation with open-vocabulary instructions. In ECCV. 
89--106."},{"key":"e_1_2_1_31_1","first-page":"14081","article-title":"Neural ffts for universal texture image synthesis","volume":"33","author":"Mardani Morteza","year":"2020","unstructured":"Morteza Mardani, Guilin Liu, Aysegul Dundar, Shiqiu Liu, Andrew Tao, and Bryan Catanzaro. 2020. Neural ffts for universal texture image synthesis. NeurIPS 33 (2020), 14081--14092.","journal-title":"NeurIPS"},{"key":"e_1_2_1_32_1","doi-asserted-by":"crossref","unstructured":"Ron Mokady Amir Hertz Kfir Aberman Yael Pritch and Daniel Cohen-Or. 2023. Null-text inversion for editing real images using guided diffusion models. In CVPR. 6038--6047.","DOI":"10.1109\/CVPR52729.2023.00585"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.13117"},{"volume-title":"Generating \u03c9-tile set for texture synthesis","author":"Ng T-Y","key":"e_1_2_1_34_1","unstructured":"T-Y Ng, T-S Tan, and Xinyu Zhang. 2005. Generating \u03c9-tile set for texture synthesis. In IEEE Int. Comp. Graph. 177--184."},{"key":"e_1_2_1_35_1","volume-title":"GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models. In ICML. 16784--16804.","author":"Nichol Alexander Quinn","year":"2022","unstructured":"Alexander Quinn Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob Mcgrew, Ilya Sutskever, and Mark Chen. 2022. GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models. In ICML. 16784--16804."},{"key":"e_1_2_1_36_1","unstructured":"Yaniv Nikankin Niv Haim and Michal Irani. 2023. SinFusion: training diffusion models on a single image or video. In ICML. 26199--26214."},{"key":"e_1_2_1_37_1","volume-title":"Pytorch: An imperative style, high-performance deep learning library. NeurIPS 32","author":"Paszke Adam","year":"2019","unstructured":"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. 
Pytorch: An imperative style, high-performance deep learning library. NeurIPS 32 (2019)."},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/325165.325247"},{"key":"e_1_2_1_39_1","first-page":"6994","article-title":"GramGAN: Deep 3d texture synthesis from 2d exemplars","volume":"33","author":"Portenier Tiziano","year":"2020","unstructured":"Tiziano Portenier, Siavash Arjomand Bigdeli, and Orcun Goksel. 2020. GramGAN: Deep 3d texture synthesis from 2d exemplars. NeurIPS 33 (2020), 6994--7004.","journal-title":"NeurIPS"},{"key":"e_1_2_1_40_1","volume-title":"Hierarchical Text-Conditional Image Generation with CLIP Latents. arXiv preprint arXiv:2204.06125","author":"Ramesh Aditya","year":"2022","unstructured":"Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical Text-Conditional Image Generation with CLIP Latents. arXiv preprint arXiv:2204.06125 (2022)."},{"key":"e_1_2_1_41_1","volume-title":"Texture: Text-guided texturing of 3d shapes. arXiv preprint arXiv:2302.01721","author":"Richardson Elad","year":"2023","unstructured":"Elad Richardson, Gal Metzer, Yuval Alaluf, Raja Giryes, and Daniel Cohen-Or. 2023. Texture: Text-guided texturing of 3d shapes. arXiv preprint arXiv:2302.01721 (2023)."},{"key":"e_1_2_1_42_1","doi-asserted-by":"crossref","unstructured":"Carlos Rodriguez-Pardo Dan Casas Elena Garces and Jorge Lopez-Moreno. 2024. TexTile: A Differentiable Metric for Texture Tileability. In CVPR.","DOI":"10.1109\/CVPR52733.2024.00425"},{"key":"e_1_2_1_43_1","volume-title":"Seamlessgan: Self-supervised synthesis of tileable texture maps","author":"Rodriguez-Pardo Carlos","year":"2022","unstructured":"Carlos Rodriguez-Pardo and Elena Garces. 2022. Seamlessgan: Self-supervised synthesis of tileable texture maps. IEEE Trans. Vis. and Comp. Graph. 
(2022)."},{"key":"e_1_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cag.2019.06.010"},{"key":"e_1_2_1_45_1","doi-asserted-by":"crossref","unstructured":"Robin Rombach Andreas Blattmann Dominik Lorenz Patrick Esser and Bj\u00f6rn Ommer. 2022. High-Resolution Image Synthesis With Latent Diffusion Models. In CVPR. 10684--10695.","DOI":"10.1109\/CVPR52688.2022.01042"},{"key":"e_1_2_1_46_1","first-page":"36479","article-title":"Photorealistic text-to-image diffusion models with deep language understanding","volume":"35","author":"Saharia Chitwan","year":"2022","unstructured":"Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. 2022. Photorealistic text-to-image diffusion models with deep language understanding. NeurIPS 35 (2022), 36479--36494.","journal-title":"NeurIPS"},{"key":"e_1_2_1_47_1","volume-title":"Improved techniques for training gans. NeurIPS 29","author":"Salimans Tim","year":"2016","unstructured":"Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. 2016. Improved techniques for training gans. NeurIPS 29 (2016)."},{"key":"e_1_2_1_48_1","volume-title":"MatFusion: A Generative Diffusion Model for SVBRDF Capture. In SIGGRAPH Asia 2023 Conference Papers. 1--10","author":"Sartor Sam","year":"2023","unstructured":"Sam Sartor and Pieter Peers. 2023. MatFusion: A Generative Diffusion Model for SVBRDF Capture. In SIGGRAPH Asia 2023 Conference Papers. 1--10."},{"key":"e_1_2_1_49_1","volume-title":"Singan: Learning a generative model from a single natural image. In CVPR. 4570--4580.","author":"Shaham Tamar Rott","year":"2019","unstructured":"Tamar Rott Shaham, Tali Dekel, and Tomer Michaeli. 2019. Singan: Learning a generative model from a single natural image. In CVPR. 
4570--4580."},{"key":"e_1_2_1_50_1","unstructured":"Yang Song Jascha Sohl-Dickstein Diederik P Kingma Abhishek Kumar Stefano Ermon and Ben Poole. 2021. Score-Based Generative Modeling through Stochastic Differential Equations. In ICLR."},{"key":"e_1_2_1_51_1","unstructured":"Stability AI. 2022a. Stable Diffusion V2 - Inpainting. https:\/\/huggingface.co\/stabilityai\/stable-diffusion-2-inpainting."},{"key":"e_1_2_1_52_1","unstructured":"Stability AI. 2022b. Stable Diffusion XL. https:\/\/huggingface.co\/docs\/diffusers\/using-diffusers\/sdxl."},{"key":"e_1_2_1_53_1","doi-asserted-by":"crossref","unstructured":"Narek Tumanyan Michal Geyer Shai Bagon and Tali Dekel. 2023. Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation. In CVPR. 1921--1930.","DOI":"10.1109\/CVPR52729.2023.00191"},{"key":"e_1_2_1_54_1","unstructured":"Dmitry Ulyanov Vadim Lebedev Andrea Vedaldi and Victor Lempitsky. 2016. Texture networks: feed-forward synthesis of textures and stylized images. In ICML. 1349--1357."},{"key":"e_1_2_1_55_1","doi-asserted-by":"crossref","unstructured":"Dmitry Ulyanov Andrea Vedaldi and Victor Lempitsky. 2017. Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis. In CVPR. 6924--6932.","DOI":"10.1109\/CVPR.2017.437"},{"key":"e_1_2_1_56_1","article-title":"On-the-fly multi-scale infinite texturing from example","volume":"32","author":"Vanhoey Kenneth","year":"2013","unstructured":"Kenneth Vanhoey, Basile Sauvage, Fr\u00e9d\u00e9ric Larue, and Jean-Michel Dischler. 2013. On-the-fly multi-scale infinite texturing from example. ACM Trans. Graph. 32, 6 (2013).","journal-title":"ACM Trans. Graph."},{"key":"e_1_2_1_57_1","volume-title":"ControlMat: A Controlled Generative Approach to Material Capture. 
arXiv preprint arXiv:2309.01700","author":"Vecchio Giuseppe","year":"2023","unstructured":"Giuseppe Vecchio, Rosalie Martin, Arthur Roullier, Adrien Kaiser, Romain Rouffet, Valentin Deschaintre, and Tamy Boubekeur. 2023. ControlMat: A Controlled Generative Approach to Material Capture. arXiv preprint arXiv:2309.01700 (2023)."},{"key":"e_1_2_1_58_1","volume-title":"Sketch-Guided Text-to-Image Diffusion Models. In ACM SIGGRAPH 2023 Conference Proceedings. Article 55","author":"Voynov Andrey","year":"2023","unstructured":"Andrey Voynov, Kfir Aberman, and Daniel Cohen-Or. 2023. Sketch-Guided Text-to-Image Diffusion Models. In ACM SIGGRAPH 2023 Conference Proceedings. Article 55, 11 pages."},{"key":"e_1_2_1_59_1","volume-title":"Diffusion Image Analogies. In ACM SIGGRAPH 2023 Conference Proceedings. Article 79","author":"\u0160ubrtov\u00e1 Ad\u00e9la","year":"2023","unstructured":"Ad\u00e9la \u0160ubrtov\u00e1, Michal Luk\u00e1\u010d, Jan \u010cech, David Futschik, Eli Shechtman, and Daniel S\u00fdkora. 2023. Diffusion Image Analogies. In ACM SIGGRAPH 2023 Conference Proceedings. Article 79, 10 pages."},{"key":"e_1_2_1_60_1","volume-title":"Proving theorems by pattern recognition---II. Bell system technical journal 40, 1","author":"Wang Hao","year":"1961","unstructured":"Hao Wang. 1961. Proving theorems by pattern recognition---II. Bell system technical journal 40, 1 (1961), 1--41."},{"key":"e_1_2_1_61_1","unstructured":"Jianyi Wang Kelvin C.K. Chan and Chen Change Loy. 2023. Exploring CLIP for assessing the look and feel of images. In AAAI. Article 284 9 pages."},{"key":"e_1_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1145\/1058129.1058138"},{"key":"e_1_2_1_63_1","volume-title":"3D-aware Image Generation using 2D Diffusion Models. arXiv preprint arXiv:2303.17905","author":"Xiang Jianfeng","year":"2023","unstructured":"Jianfeng Xiang, Jiaolong Yang, Binbin Huang, and Xin Tong. 2023. 3D-aware Image Generation using 2D Diffusion Models. 
arXiv preprint arXiv:2303.17905 (2023)."},{"key":"e_1_2_1_64_1","volume-title":"Matlaber: Material-aware text-to-3d via latent brdf auto-encoder. arXiv preprint arXiv:2308.09278","author":"Xu Xudong","year":"2023","unstructured":"Xudong Xu, Zhaoyang Lyu, Xingang Pan, and Bo Dai. 2023. Matlaber: Material-aware text-to-3d via latent brdf auto-encoder. arXiv preprint arXiv:2308.09278 (2023)."},{"key":"e_1_2_1_65_1","volume-title":"IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models. arXiv preprint arXiv:2308.06721","author":"Ye Hu","year":"2023","unstructured":"Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, and Wei Yang. 2023. IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models. arXiv preprint arXiv:2308.06721 (2023)."},{"key":"e_1_2_1_66_1","unstructured":"Ning Yu Connelly Barnes Eli Shechtman Sohrab Amirghodsi and Michal Lukac. 2019. Texture mixer: A network for controllable synthesis and interpolation of texture. In CVPR. 12164--12173."},{"key":"e_1_2_1_67_1","volume-title":"Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models. arXiv preprint arXiv:2312.13913","author":"Zeng Xianfang","year":"2023","unstructured":"Xianfang Zeng, Xin Chen, Zhongqi Qi, Wen Liu, Zibo Zhao, Zhibin Wang, Bin Fu, Yong Liu, and Gang Yu. 2023. Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models. arXiv preprint arXiv:2312.13913 (2023)."},{"key":"e_1_2_1_68_1","doi-asserted-by":"crossref","unstructured":"Lvmin Zhang Anyi Rao and Maneesh Agrawala. 2023. Adding conditional control to text-to-image diffusion models. In ICCV. 3836--3847.","DOI":"10.1109\/ICCV51070.2023.00355"},{"key":"e_1_2_1_69_1","volume-title":"Controllable Material Generation and Capture. In SIGGRAPH Asia 2022 Conference Papers. Article 34","author":"Zhou Xilong","year":"2022","unstructured":"Xilong Zhou, Milos Hasan, Valentin Deschaintre, Paul Guerrero, Kalyan Sunkavalli, and Nima Khademi Kalantari. 2022. 
TileGen: Tileable, Controllable Material Generation and Capture. In SIGGRAPH Asia 2022 Conference Papers. Article 34, 9 pages."},{"key":"e_1_2_1_70_1","doi-asserted-by":"crossref","unstructured":"Yang Zhou Kaijian Chen Rongjun Xiao and Hui Huang. 2023. Neural Texture Synthesis With Guided Correspondence. In CVPR. 18095--18104.","DOI":"10.1109\/CVPR52729.2023.01735"},{"key":"e_1_2_1_71_1","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3197517.3201285","article-title":"Non-stationary texture synthesis by adversarial expansion","volume":"37","author":"Zhou Yang","year":"2018","unstructured":"Yang Zhou, Zhen Zhu, Xiang Bai, Dani Lischinski, Daniel Cohen-Or, and Hui Huang. 2018. Non-stationary texture synthesis by adversarial expansion. ACM Trans. Graph. 37, 4 (2018), 1--13.","journal-title":"ACM Trans. Graph."}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3687981","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3687981","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:09:58Z","timestamp":1750295398000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3687981"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,11,19]]},"references-count":71,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2024,12,19]]}},"alternative-id":["10.1145\/3687981"],"URL":"https:\/\/doi.org\/10.1145\/3687981","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"type":"print","value":"0730-0301"},{"type":"electronic","value":"1557-7368"}],"subject":[],"published":{"date-parts":[[2024,11,19]]},"assertion":[{"value":"2024-11-19","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}