{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T21:16:18Z","timestamp":1764969378807,"version":"3.46.0"},"reference-count":74,"publisher":"Association for Computing Machinery (ACM)","issue":"6","funder":[{"DOI":"10.13039\/501100004377","name":"Hong Kong Polytechnic University","doi-asserted-by":"publisher","award":["P0048387","P0044520","P0050657","P0049586"],"award-info":[{"award-number":["P0048387","P0044520","P0050657","P0049586"]}],"id":[{"id":"10.13039\/501100004377","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62476192"],"award-info":[{"award-number":["62476192"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100006606","name":"Natural Science Foundation of Tianjin City","doi-asserted-by":"publisher","award":["23JCQNJC02010"],"award-info":[{"award-number":["23JCQNJC02010"]}],"id":[{"id":"10.13039\/501100006606","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2025,12]]},"abstract":"<jats:p>\n                    Multi-domain image inpainting utilizes complementary contextual information from auxiliary domain images to restore corrupted regions. While existing methods reconstruct auxiliary images to provide additional guidance, they face fundamental limitations: recovered pixels with complex patterns often lack representative details, while oversimplified patterns offer insufficient contextual information. To address these challenges, we propose HRC-Net, a novel framework incorporating three generative sub-networks for the comprehensive image inpainting task. 
Our architecture consists of: (1) A\n                    <jats:italic toggle=\"yes\">Hypothesis Sub-network<\/jats:italic>\n                    that enables robust sampling of pixel-wise hypotheses from multi-domain inputs; (2) A\n                    <jats:italic toggle=\"yes\">Representative Sub-network<\/jats:italic>\n                    that learns to score hypothesis quality based on contextual relevance; and (3) A\n                    <jats:italic toggle=\"yes\">Collaboration Sub-network<\/jats:italic>\n                    that optimizes adaptive fusion kernels to integrate the most pertinent details. Together, these components model the joint distribution of representative scores and convolutional kernels, fostering a precise interaction between auxiliary hypotheses and target image corruption to meticulously repair the target image. Extensive evaluations across multiple benchmark datasets demonstrate HRC-Net's superior performance, significantly outperforming state-of-the-art methods in both quantitative metrics and visual quality.\n                  <\/jats:p>","DOI":"10.1145\/3763337","type":"journal-article","created":{"date-parts":[[2025,12,4]],"date-time":"2025-12-04T17:15:39Z","timestamp":1764868539000},"page":"1-13","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["HRC-Net: Learning Visual Hypothesis, Representative, and Collaboration for Multi-Domain Image Inpainting"],"prefix":"10.1145","volume":"44","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-7977-6586","authenticated-orcid":false,"given":"Xin","family":"Wang","sequence":"first","affiliation":[{"name":"Department of Computing, The Hong Kong Polytechnic University, Kowloon, Hong Kong"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9324-800X","authenticated-orcid":false,"given":"Di","family":"Lin","sequence":"additional","affiliation":[{"name":"College of Intelligence and Computing, Tianjin University, Tianjin, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7498-3033","authenticated-orcid":false,"given":"Wanchao","family":"Su","sequence":"additional","affiliation":[{"name":"Department of Human Centred Computing, Monash University, Melbourne, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0009-0001-9388-8146","authenticated-orcid":false,"given":"Ji","family":"Du","sequence":"additional","affiliation":[{"name":"Department of Computing, The Hong Kong Polytechnic University, Kowloon, Hong Kong"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2141-6297","authenticated-orcid":false,"given":"Renjie","family":"Zhang","sequence":"additional","affiliation":[{"name":"Department of Computing, The Hong Kong Polytechnic University, Kowloon, Hong Kong"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8219-5590","authenticated-orcid":false,"given":"Jie","family":"Zhang","sequence":"additional","affiliation":[{"name":"Faculty of Applied Sciences, Macao Polytechnic University, Macau, Macao"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-7137-6955","authenticated-orcid":false,"given":"Haotian","family":"Dong","sequence":"additional","affiliation":[{"name":"College of Intelligence and Computing, Tianjin University, Tianjin, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5855-3810","authenticated-orcid":false,"given":"Ke","family":"Xu","sequence":"additional","affiliation":[{"name":"Department of Computer Science, City University of Hong Kong, Hong Kong, Hong Kong"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0974-9299","authenticated-orcid":false,"given":"Qing","family":"Guo","sequence":"additional","affiliation":[{"name":"College of Computer Science, Nankai University, Tianjin, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1503-0240","authenticated-orcid":false,"given":"Ping","family":"Li","sequence":"additional","affiliation":[{"name":"Department of Computing, The Hong Kong Polytechnic University, Kowloon, Hong 
Kong"}]}],"member":"320","published-online":{"date-parts":[[2025,12,4]]},"reference":[{"key":"e_1_2_2_1_1","doi-asserted-by":"crossref","unstructured":"Chenjie Cao Qiaole Dong and Yanwei Fu. 2022. Learning prior feature and attention enhanced image inpainting. In ECCV. 306\u2013322.","DOI":"10.1007\/978-3-031-19784-0_18"},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2023.3280222"},{"key":"e_1_2_2_3_1","doi-asserted-by":"crossref","unstructured":"Haiwei Chen and Yajie Zhao. 2024. Don't Look into the Dark: Latent Codes for Pluralistic Image Inpainting. In CVPR. 7591\u20137600.","DOI":"10.1109\/CVPR52733.2024.00725"},{"key":"e_1_2_2_4_1","doi-asserted-by":"crossref","unstructured":"Qifeng Chen and Vladlen Koltun. 2017. Photographic image synthesis with cascaded refinement networks. In ICCV.","DOI":"10.1109\/ICCV.2017.168"},{"key":"e_1_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/3731189"},{"key":"e_1_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2018.05.025"},{"key":"e_1_2_2_7_1","unstructured":"Haotian Dong Xin Wang Di Lin Yipeng Wu Qin Chen Ruonan Liu Kairui Yang Ping Li and Qing Guo. 2025. NoiseController: Towards Consistent Multi-view Video Generation via Noise Decomposition and Collaboration. In ICCV."},{"key":"e_1_2_2_8_1","volume-title":"Lau","author":"Dong Zheng","year":"2022","unstructured":"Zheng Dong, Ke Xu, Ziheng Duan, Hujun Bao, Weiwei Xu, and Rynson W.H. Lau. 2022. Geometry-aware two-scale PIFu representation for human reconstruction. In NeurIPS."},{"key":"e_1_2_2_9_1","volume-title":"Lau","author":"Dong Zheng","year":"2024","unstructured":"Zheng Dong, Ke Xu, Yaoan Gao, Hujun Bao, Weiwei Xu, and Rynson W. H. Lau. 2024. Gaussian Surfel Splatting for Live Human Performance Capture. ACM TOG 43, 6 (2024)."},{"key":"e_1_2_2_10_1","volume-title":"Lau","author":"Dong Zheng","year":"2023","unstructured":"Zheng Dong, Ke Xu, Yaoan Gao, Qilin Sun, Hujun Bao, Weiwei Xu, and Rynson W. H. Lau. 2023. 
SAILOR: Synergizing Radiance and Occupancy Fields for Live Human Performance Capture. ACM TOG 42, 6 (2023)."},{"key":"e_1_2_2_11_1","volume-title":"Metameric inpainting for image warping. 29, 12","author":"Dos Anjos Rafael Kuffner","year":"2022","unstructured":"Rafael Kuffner Dos Anjos, David Walton, Kaan Ak\u015fit, Sebastian Friston, David Swapp, Anthony Steed, and Tobias Ritschel. 2022. Metameric inpainting for image warping. 29, 12 (2022), 5511\u20135522."},{"key":"e_1_2_2_12_1","volume-title":"Cross-Image Context for Single Image Inpainting. NeurIPS","author":"Feng Tingliang","year":"2022","unstructured":"Tingliang Feng, Wei Feng, Weiqi Li, and Di Lin. 2022. Cross-Image Context for Single Image Inpainting. NeurIPS (2022)."},{"key":"e_1_2_2_13_1","volume-title":"Generative adversarial nets. NeurIPS","author":"Goodfellow Ian","year":"2014","unstructured":"Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. NeurIPS (2014)."},{"key":"e_1_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3474085.3475170"},{"key":"e_1_2_2_15_1","unstructured":"Kaiming He Xinlei Chen Saining Xie Yanghao Li Piotr Doll\u00e1r and Ross Girshick. 2022. Masked autoencoders are scalable vision learners. In CVPR. 16000\u201316009."},{"key":"e_1_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/2601097.2601205"},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3073659"},{"key":"e_1_2_2_18_1","doi-asserted-by":"crossref","unstructured":"Tero Karras Samuli Laine and Timo Aila. 2019. A style-based generator architecture for generative adversarial networks. In CVPR.","DOI":"10.1109\/CVPR.2019.00453"},{"key":"e_1_2_2_19_1","volume-title":"Zoom-to-inpaint: Image inpainting with high-frequency details. 
In CVPR.","author":"Kim Soo Ye","year":"2022","unstructured":"Soo Ye Kim, Kfir Aberman, Nori Kanazawa, Rahul Garg, Neal Wadhwa, Huiwen Chang, Nikhil Karnad, Munchurl Kim, and Orly Liba. 2022. Zoom-to-inpaint: Image inpainting with high-frequency details. In CVPR."},{"key":"e_1_2_2_20_1","doi-asserted-by":"crossref","unstructured":"Alexander Kirillov Eric Mintun Nikhila Ravi Hanzi Mao Chloe Rolland Laura Gustafson Tete Xiao Spencer Whitehead Alexander C Berg Wan-Yen Lo et al. 2023. Segment anything. In ICCV. 4015\u20134026.","DOI":"10.1109\/ICCV51070.2023.00371"},{"key":"e_1_2_2_21_1","unstructured":"Keunsoo Ko and Chang-Su Kim. 2023. Continuously masked transformer for image inpainting. In ICCV."},{"key":"e_1_2_2_22_1","unstructured":"Jingyuan Li Ning Wang Lefei Zhang Bo Du and Dacheng Tao. 2020. Recurrent feature reasoning for image inpainting. In CVPR."},{"key":"e_1_2_2_23_1","volume-title":"MAT: Mask-Aware Transformer for Large Hole Image Inpainting. In CVPR.","author":"Li Wenbo","year":"2022","unstructured":"Wenbo Li, Zhe Lin, Kun Zhou, Lu Qi, Yi Wang, and Jiaya Jia. 2022c. MAT: Mask-Aware Transformer for Large Hole Image Inpainting. In CVPR."},{"key":"e_1_2_2_24_1","unstructured":"Wenbo Li Xin Yu Kun Zhou Yibing Song and Zhe Lin. 2024. Image Inpainting via Iteratively Decoupled Probabilistic Modeling. In ICLR."},{"key":"e_1_2_2_25_1","volume-title":"MISF: Multi-Level Interactive Siamese Filtering for High-Fidelity Image Inpainting. In CVPR.","author":"Li Xiaoguang","year":"2022","unstructured":"Xiaoguang Li, Qing Guo, Di Lin, Ping Li, Wei Feng, and Song Wang. 2022a. MISF: Multi-Level Interactive Siamese Filtering for High-Fidelity Image Inpainting. In CVPR."},{"key":"e_1_2_2_26_1","volume-title":"MISF: Multi-level interactive Siamese filtering for high-fidelity image inpainting. In CVPR.","author":"Li Xiaoguang","year":"2022","unstructured":"Xiaoguang Li, Qing Guo, Di Lin, Ping Li, Wei Feng, and Song Wang. 2022b. 
MISF: Multi-level interactive Siamese filtering for high-fidelity image inpainting. In CVPR."},{"key":"e_1_2_2_27_1","doi-asserted-by":"crossref","unstructured":"Liang Liao Jing Xiao Zheng Wang Chia-Wen Lin and Shin'ichi Satoh. 2020. Guidance and evaluation: Semantic-aware image inpainting for mixed scenes. In ECCV.","DOI":"10.1007\/978-3-030-58583-9_41"},{"key":"e_1_2_2_28_1","doi-asserted-by":"crossref","unstructured":"Liang Liao Jing Xiao Zheng Wang Chia-Wen Lin and Shin'ichi Satoh. 2021. Image inpainting guided by coherence priors of semantics and textures. In CVPR.","DOI":"10.1109\/CVPR46437.2021.00647"},{"key":"e_1_2_2_29_1","volume-title":"Gerhard Petrus Hancke, and Rynson W. H. Lau","author":"Liu Fang","year":"2025","unstructured":"Fang Liu, Yuhao Liu, Ke Xu, Shuquan Ye, Gerhard Petrus Hancke, and Rynson W. H. Lau. 2025. Language-Guided Salient Object Ranking. In CVPR. 29803\u201329813."},{"key":"e_1_2_2_30_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2022.3209702"},{"key":"e_1_2_2_31_1","unstructured":"Guilin Liu Fitsum A Reda Kevin J Shih Ting-Chun Wang Andrew Tao and Bryan Catanzaro. 2018. Image inpainting for irregular holes using partial convolutions. In ECCV."},{"key":"e_1_2_2_32_1","unstructured":"Hongyu Liu Bin Jiang Yibing Song Wei Huang and Chao Yang. 2020. Rethinking image inpainting via a mutual encoder-decoder with feature equalizations. In ECCV."},{"key":"e_1_2_2_33_1","volume-title":"Pd-gan: Probabilistic diverse gan for image inpainting. In CVPR.","author":"Liu Hongyu","year":"2021","unstructured":"Hongyu Liu, Ziyu Wan, Wei Huang, Yibing Song, Xintong Han, and Jing Liao. 2021. Pd-gan: Probabilistic diverse gan for image inpainting. In CVPR."},{"key":"e_1_2_2_34_1","volume-title":"Structure Matters: Tackling the Semantic Discrepancy in Diffusion Models for Image Inpainting. In CVPR. 8038\u20138047.","author":"Liu Haipeng","year":"2024","unstructured":"Haipeng Liu, Yang Wang, Biao Qian, Meng Wang, and Yong Rui. 2024b. 
Structure Matters: Tackling the Semantic Discrepancy in Diffusion Models for Image Inpainting. In CVPR. 8038\u20138047."},{"key":"e_1_2_2_35_1","unstructured":"Qiankun Liu Zhentao Tan Dongdong Chen Qi Chu Xiyang Dai Yinpeng Chen Mengchen Liu Lu Yuan and Nenghai Yu. 2022b. Reduce information loss in transformers for pluralistic image inpainting. In CVPR."},{"key":"e_1_2_2_36_1","volume-title":"Lau","author":"Liu Yuhao","year":"2024","unstructured":"Yuhao Liu, Zhanghan Ke, Ke Xu, Fang Liu, Zhenwei Wang, and Rynson W.H. Lau. 2024a. Recasting regional lighting for shadow removal. In AAAI."},{"key":"e_1_2_2_37_1","doi-asserted-by":"crossref","unstructured":"Ziwei Liu Ping Luo Xiaogang Wang and Xiaoou Tang. 2015. Deep learning face attributes in the wild. In ICCV.","DOI":"10.1109\/ICCV.2015.425"},{"key":"e_1_2_2_38_1","first-page":"5130","article-title":"FACEMUG","volume":"31","author":"Lu Wanglong","year":"2025","unstructured":"Wanglong Lu, Jikai Wang, Xiaogang Jin, Xianta Jiang, and Hanli Zhao. 2025. FACEMUG: A Multimodal Generative and Fusion Framework for Local Facial Editing. 31, 9 (2025), 5130\u20135145.","journal-title":"A Multimodal Generative and Fusion Framework for Local Facial Editing."},{"key":"e_1_2_2_39_1","doi-asserted-by":"crossref","unstructured":"Andreas Lugmayr Martin Danelljan Andres Romero Fisher Yu Radu Timofte and Luc Van Gool. 2022. RePaint: Inpainting Using Denoising Diffusion Probabilistic Models. In CVPR.","DOI":"10.1109\/CVPR52688.2022.01117"},{"key":"e_1_2_2_40_1","volume-title":"Spin-nerf: Multiview segmentation and perceptual inpainting with neural radiance fields. In CVPR. 20669\u201320679.","author":"Mirzaei Ashkan","year":"2023","unstructured":"Ashkan Mirzaei, Tristan Aumentado-Armstrong, Konstantinos G Derpanis, Jonathan Kelly, Marcus A Brubaker, Igor Gilitschenski, and Alex Levinshtein. 2023. Spin-nerf: Multiview segmentation and perceptual inpainting with neural radiance fields. In CVPR. 
20669\u201320679."},{"key":"e_1_2_2_41_1","volume-title":"Exemplar-based inpainting for 6dof virtual reality photos. 29, 11","author":"Mori Shohei","year":"2023","unstructured":"Shohei Mori, Dieter Schmalstieg, and Denis Kalkofen. 2023. Exemplar-based inpainting for 6dof virtual reality photos. 29, 11 (2023), 4644\u20134654."},{"key":"e_1_2_2_42_1","volume-title":"Edgeconnect: Generative image inpainting with adversarial edge learning. arXiv preprint arXiv:1901.00212","author":"Nazeri Kamyar","year":"2019","unstructured":"Kamyar Nazeri, Eric Ng, Tony Joseph, Faisal Z Qureshi, and Mehran Ebrahimi. 2019. Edgeconnect: Generative image inpainting with adversarial edge learning. arXiv preprint arXiv:1901.00212 (2019)."},{"key":"e_1_2_2_43_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-023-01977-6"},{"key":"e_1_2_2_44_1","volume-title":"Structureflow: Image inpainting via structure-aware appearance flow. In ICCV.","author":"Ren Yurui","year":"2019","unstructured":"Yurui Ren, Xiaoming Yu, Ruonan Zhang, Thomas H Li, Shan Liu, and Ge Li. 2019. Structureflow: Image inpainting via structure-aware appearance flow. In ICCV."},{"key":"e_1_2_2_45_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW56347.2022.00124"},{"key":"e_1_2_2_46_1","volume-title":"Mi-gan: A simple baseline for image inpainting on mobile devices. In ICCV.","author":"Sargsyan Andranik","year":"2023","unstructured":"Andranik Sargsyan, Shant Navasardyan, Xingqian Xu, and Humphrey Shi. 2023. Mi-gan: A simple baseline for image inpainting on mobile devices. In ICCV."},{"key":"e_1_2_2_47_1","doi-asserted-by":"crossref","unstructured":"Nathan Silberman Derek Hoiem Pushmeet Kohli and Rob Fergus. 2012. Indoor segmentation and support inference from rgbd images. 
In ECCV.","DOI":"10.1007\/978-3-642-33715-4_54"},{"key":"e_1_2_2_48_1","doi-asserted-by":"publisher","DOI":"10.1145\/3132703"},{"key":"e_1_2_2_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925972"},{"key":"e_1_2_2_50_1","doi-asserted-by":"crossref","unstructured":"Sridhar Sola and Darshan Gera. 2023. Unmasking Your Expression: Expression-Conditioned GAN for Masked Face Inpainting. In CVPR.","DOI":"10.1109\/CVPRW59228.2023.00628"},{"key":"e_1_2_2_51_1","volume-title":"Spg-net: Segmentation prediction and guidance network for image inpainting. arXiv preprint arXiv:1805.03356","author":"Song Yuhang","year":"2018","unstructured":"Yuhang Song, Chao Yang, Yeji Shen, Peng Wang, Qin Huang, and C-C Jay Kuo. 2018. Spg-net: Segmentation prediction and guidance network for image inpainting. arXiv preprint arXiv:1805.03356 (2018)."},{"key":"e_1_2_2_52_1","volume-title":"DrawingInStyles: Portrait image generation and editing with spatially conditioned StyleGAN. 29, 10","author":"Su Wanchao","year":"2022","unstructured":"Wanchao Su, Hui Ye, Shu-Yu Chen, Lin Gao, and Hongbo Fu. 2022. DrawingInStyles: Portrait image generation and editing with spatially conditioned StyleGAN. 29, 10 (2022), 4074\u20134088."},{"key":"e_1_2_2_53_1","doi-asserted-by":"crossref","unstructured":"Jian Sun Lu Yuan Jiaya Jia and Heung-Yeung Shum. 2005. Image completion with structure propagation. In SIGGRAPH. 861\u2013868.","DOI":"10.1145\/1186822.1073274"},{"key":"e_1_2_2_54_1","doi-asserted-by":"crossref","unstructured":"Wentao Wang Li Niu Jianfu Zhang Xue Yang and Liqing Zhang. 2022. Dual-path image inpainting with auxiliary gan inversion. In CVPR.","DOI":"10.1109\/CVPR52688.2022.01113"},{"key":"e_1_2_2_55_1","doi-asserted-by":"crossref","unstructured":"Xintao Wang Ke Yu Chao Dong and Chen Change Loy. 2018. Recovering realistic texture in image super-resolution by deep spatial feature transform. 
In CVPR.","DOI":"10.1109\/CVPR.2018.00070"},{"key":"e_1_2_2_56_1","doi-asserted-by":"crossref","unstructured":"Yikai Wang Chenjie Cao Junqiu Yu Ke Fan Xiangyang Xue and Yanwei Fu. 2025. Towards Enhanced Image Inpainting: Mitigating Unwanted Object Insertion and Preserving Color Consistency. In CVPR. 23237\u201323248.","DOI":"10.1109\/CVPR52734.2025.02164"},{"key":"e_1_2_2_57_1","volume-title":"Lau","author":"Warren Alex","year":"2024","unstructured":"Alex Warren, Ke Xu, Jiaying Lin, Gary K.L. Tam, and Rynson W.H. Lau. 2024. Effective Video Mirror Detection with Inconsistent Motion Cues. In CVPR. 17244\u201317252."},{"key":"e_1_2_2_58_1","volume-title":"SmartBrush: Text and Shape Guided Object Inpainting with Diffusion Model. arXiv preprint arXiv:2212.05034","author":"Xie Shaoan","year":"2022","unstructured":"Shaoan Xie, Zhifei Zhang, Zhe Lin, Tobias Hinz, and Kun Zhang. 2022. SmartBrush: Text and Shape Guided Object Inpainting with Diffusion Model. arXiv preprint arXiv:2212.05034 (2022)."},{"key":"e_1_2_2_59_1","volume-title":"Gerhard Petrus Hancke, and Rynson W. H. Lau","author":"Xu Ke","year":"2023","unstructured":"Ke Xu, Gerhard Petrus Hancke, and Rynson W. H. Lau. 2023. Learning Image Harmonization in the Linear Color Space. In ICCV. 12536\u201312545."},{"key":"e_1_2_2_60_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2021.3122930"},{"key":"e_1_2_2_61_1","doi-asserted-by":"crossref","unstructured":"Lihe Yang Bingyi Kang Zilong Huang Xiaogang Xu Jiashi Feng and Hengshuang Zhao. 2024. Depth anything: Unleashing the power of large-scale unlabeled data. In CVPR. 10371\u201310381.","DOI":"10.1109\/CVPR52733.2024.00987"},{"key":"e_1_2_2_62_1","volume-title":"Alexander G Schwing, Mark Hasegawa-Johnson, and Minh N Do.","author":"Yeh Raymond A","year":"2017","unstructured":"Raymond A Yeh, Chen Chen, Teck Yian Lim, Alexander G Schwing, Mark Hasegawa-Johnson, and Minh N Do. 2017. Semantic image inpainting with deep generative models. 
In CVPR."},{"key":"e_1_2_2_63_1","volume-title":"Bahri Batuhan Bilecen, and Aysegul Dundar","author":"Yildirim Ahmet Burak","year":"2023","unstructured":"Ahmet Burak Yildirim, Hamza Pehlivan, Bahri Batuhan Bilecen, and Aysegul Dundar. 2023. Diverse inpainting and editing with gan inversion. In ICCV."},{"key":"e_1_2_2_64_1","unstructured":"Jiahui Yu Zhe Lin Jimei Yang Xiaohui Shen Xin Lu and Thomas S Huang. 2019. Free-form image inpainting with gated convolution. In ICCV."},{"key":"e_1_2_2_65_1","doi-asserted-by":"crossref","unstructured":"Yanhong Zeng Jianlong Fu Hongyang Chao and Baining Guo. 2019. Learning pyramid-context encoder network for high-quality image inpainting. In CVPR.","DOI":"10.1109\/CVPR.2019.00158"},{"key":"e_1_2_2_66_1","volume-title":"Cr-fill: Generative image inpainting with auxiliary contextual reconstruction. In ICCV.","author":"Zeng Yu","year":"2021","unstructured":"Yu Zeng, Zhe Lin, Huchuan Lu, and Vishal M Patel. 2021. Cr-fill: Generative image inpainting with auxiliary contextual reconstruction. In ICCV."},{"key":"e_1_2_2_67_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2023.3305243"},{"key":"e_1_2_2_68_1","volume-title":"Fully context-aware image inpainting with a learned semantic pyramid. Pattern Recognition","author":"Zhang Wendong","year":"2023","unstructured":"Wendong Zhang, Yunbo Wang, Bingbing Ni, and Xiaokang Yang. 2023. Fully context-aware image inpainting with a learned semantic pyramid. Pattern Recognition (2023)."},{"key":"e_1_2_2_69_1","volume-title":"Context-aware image inpainting with learned semantic priors. arXiv preprint arXiv:2106.07220","author":"Zhang Wendong","year":"2021","unstructured":"Wendong Zhang, Junwei Zhu, Ying Tai, Yunbo Wang, Wenqing Chu, Bingbing Ni, Chengjie Wang, and Xiaokang Yang. 2021. Context-aware image inpainting with learned semantic priors. 
arXiv preprint arXiv:2106.07220 (2021)."},{"key":"e_1_2_2_70_1","volume-title":"Uctgan: Diverse image inpainting based on unsupervised cross-space translation. In CVPR.","author":"Zhao Lei","year":"2020","unstructured":"Lei Zhao, Qihang Mo, Sihuan Lin, Zhizhong Wang, Zhiwen Zuo, Haibo Chen, Wei Xing, and Dongming Lu. 2020. Uctgan: Diverse image inpainting based on unsupervised cross-space translation. In CVPR."},{"key":"e_1_2_2_71_1","unstructured":"Shengyu Zhao Jonathan Cui Yilun Sheng Yue Dong Xiao Liang Eric I Chang and Yan Xu. 2021. Large scale image completion via co-modulated generative adversarial networks. In ICLR."},{"key":"e_1_2_2_72_1","volume-title":"Pluralistic free-form image completion. IJCV","author":"Zheng Chuanxia","year":"2021","unstructured":"Chuanxia Zheng, Tat-Jen Cham, and Jianfei Cai. 2021. Pluralistic free-form image completion. IJCV (2021)."},{"key":"e_1_2_2_73_1","doi-asserted-by":"crossref","unstructured":"Haitian Zheng Zhe Lin Jingwan Lu Scott Cohen Eli Shechtman Connelly Barnes Jianming Zhang Ning Xu Sohrab Amirghodsi and Jiebo Luo. 2022. Image inpainting with cascaded modulation gan and object-aware training. In ECCV. 277\u2013296.","DOI":"10.1007\/978-3-031-19787-1_16"},{"key":"e_1_2_2_74_1","volume-title":"Places: A 10 million image database for scene recognition","author":"Zhou Bolei","year":"2017","unstructured":"Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. 2017. Places: A 10 million image database for scene recognition. 
IEEE TPAMI (2017)."}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3763337","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T21:11:35Z","timestamp":1764969095000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3763337"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12]]},"references-count":74,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2025,12]]}},"alternative-id":["10.1145\/3763337"],"URL":"https:\/\/doi.org\/10.1145\/3763337","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"type":"print","value":"0730-0301"},{"type":"electronic","value":"1557-7368"}],"subject":[],"published":{"date-parts":[[2025,12]]},"assertion":[{"value":"2025-05-24","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-08-09","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-12-04","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}