{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,9]],"date-time":"2026-01-09T19:58:00Z","timestamp":1767988680962,"version":"3.49.0"},"reference-count":47,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2023,5,3]],"date-time":"2023-05-03T00:00:00Z","timestamp":1683072000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,5,3]],"date-time":"2023-05-03T00:00:00Z","timestamp":1683072000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2023,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Inpainting high-resolution images with large holes challenges existing deep learning-based image inpainting methods. We present PyramidFill, a novel framework for high-resolution image inpainting, which explicitly disentangles the task into two sub-tasks: content completion and texture synthesis. PyramidFill first completes the content of unknown regions in a lower-resolution image, and then progressively synthesizes the textures of unknown regions in higher-resolution images. Thus, our model consists of a pyramid of fully convolutional GANs, wherein the content GAN is responsible for completing contents in the lowest-resolution masked image, and each texture GAN is responsible for synthesizing textures in a higher-resolution image. Since completing contents and synthesizing textures demand different abilities from generators, we customize different architectures for the content GAN and the texture GANs. Experiments on multiple datasets, including CelebA-HQ, Places2 and a new natural scenery dataset (NSHQ) at different resolutions, demonstrate that PyramidFill generates higher-quality inpainting results than state-of-the-art methods.<\/jats:p>","DOI":"10.1007\/s40747-023-01080-w","type":"journal-article","created":{"date-parts":[[2023,5,3]],"date-time":"2023-05-03T11:01:48Z","timestamp":1683111708000},"page":"6297-6306","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":10,"title":["Generator pyramid for high-resolution image inpainting"],"prefix":"10.1007","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-0336-9295","authenticated-orcid":false,"given":"Leilei","family":"Cao","sequence":"first","affiliation":[]},{"given":"Tong","family":"Yang","sequence":"additional","affiliation":[]},{"given":"Yixu","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Bo","family":"Yan","sequence":"additional","affiliation":[]},{"given":"Yandong","family":"Guo","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,5,3]]},"reference":[{"key":"1080_CR1","doi-asserted-by":"crossref","unstructured":"Barnes C, Shechtman E, Finkelstein A, Goldman DB (2009) PatchMatch: a randomized correspondence algorithm for structural image editing. ACM Trans Graph 28(3):24","DOI":"10.1145\/1531326.1531330"},{"key":"1080_CR2","unstructured":"Criminisi A, Perez P, Toyama K (2003) Object removal by exemplar-based inpainting. In: CVPR"},{"key":"1080_CR3","unstructured":"Denton E, Chintala S, Szlam A, Fergus R (2015) Deep generative image models using a Laplacian pyramid of adversarial networks. In: NIPS"},{"key":"1080_CR4","doi-asserted-by":"crossref","unstructured":"Du W, Hu C, Yang H (2020) Learning invariant representation for unsupervised image restoration. In: CVPR","DOI":"10.1109\/CVPR42600.2020.01449"},{"key":"1080_CR5","unstructured":"Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial nets. In: NIPS"},{"key":"1080_CR6","doi-asserted-by":"publisher","first-page":"27","DOI":"10.1016\/j.neucom.2015.09.116","volume":"187","author":"Y Guo","year":"2016","unstructured":"Guo Y, Liu Y, Oerlemans A, Lao S, Wu S, Lew MS (2016) Deep learning for visual understanding: a review. Neurocomputing 187:27\u201348","journal-title":"Neurocomputing"},{"key":"1080_CR7","doi-asserted-by":"publisher","first-page":"87","DOI":"10.1145\/1400181.1400202","volume":"51","author":"J Hays","year":"2008","unstructured":"Hays J, Efros AA (2008) Scene completion using millions of photographs. Commun ACM 51:87\u201394","journal-title":"Commun ACM"},{"key":"1080_CR8","doi-asserted-by":"crossref","unstructured":"Iizuka S, Simo-Serra E, Ishikawa H (2017) Globally and locally consistent image completion. ACM Trans Graph 36(4):107","DOI":"10.1145\/3072959.3073659"},{"key":"1080_CR9","doi-asserted-by":"crossref","unstructured":"Isola P, Zhu J-Y, Zhou T, Efros AA (2017) Image-to-image translation with conditional adversarial networks. In: CVPR","DOI":"10.1109\/CVPR.2017.632"},{"key":"1080_CR10","doi-asserted-by":"crossref","unstructured":"Jo Y, Park J (2019) SC-FEGAN: face editing generative adversarial network with user\u2019s sketch and color. In: ICCV","DOI":"10.1109\/ICCV.2019.00183"},{"key":"1080_CR11","unstructured":"Johnson J, Alahi A, Fei-Fei L (2016) Perceptual losses for real-time style transfer and super-resolution. In: ECCV"},{"key":"1080_CR12","unstructured":"Karras T, Aila T, Laine S, Lehtinen J (2018) Progressive growing of GANs for improved quality, stability, and variation. In: ICLR"},{"issue":"7553","key":"1080_CR13","doi-asserted-by":"publisher","first-page":"436","DOI":"10.1038\/nature14539","volume":"521","author":"Y LeCun","year":"2015","unstructured":"LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521(7553):436\u2013444","journal-title":"Nature"},{"key":"1080_CR14","doi-asserted-by":"crossref","unstructured":"Ledig C, Theis L, Husz\u00e1r F, Caballero J, Cunningham A, Acosta A, Aitken A, Tejani A, Totz J, Wang Z, Shi W (2017) Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR","DOI":"10.1109\/CVPR.2017.19"},{"key":"1080_CR15","doi-asserted-by":"crossref","unstructured":"Li C, Wand M (2016) Precomputed real-time texture synthesis with Markovian generative adversarial networks. In: ECCV","DOI":"10.1007\/978-3-319-46487-9_43"},{"key":"1080_CR16","doi-asserted-by":"crossref","unstructured":"Li J, Wang N, Zhang L, Du B, Tao D (2020) Recurrent feature reasoning for image inpainting. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00778"},{"key":"1080_CR17","doi-asserted-by":"crossref","unstructured":"Li W, Lin Z, Zhou K, Qi L, Wang Y, Jia J (2022) MAT: mask-aware transformer for large hole image inpainting. In: CVPR, pp 10748\u201310758","DOI":"10.1109\/CVPR52688.2022.01049"},{"key":"1080_CR18","doi-asserted-by":"crossref","unstructured":"Li X, Guo Q, Lin D, Li P, Feng W, Wang S (2022) MISF: multi-level interactive Siamese filtering for high-fidelity image inpainting. In: CVPR, pp 1859\u20131868","DOI":"10.1109\/CVPR52688.2022.00191"},{"key":"1080_CR19","doi-asserted-by":"crossref","unstructured":"Liao L, Xiao J, Wang Z, Lin C-W, Satoh S (2021) Image inpainting guided by coherence priors of semantics and textures. In: CVPR, pp 6535\u20136544","DOI":"10.1109\/CVPR46437.2021.00647"},{"key":"1080_CR20","doi-asserted-by":"crossref","unstructured":"Liu G, Reda FA, Shih KJ, Wang T-C, Tao A, Catanzaro B (2018) Image inpainting for irregular holes using partial convolutions. In: ECCV","DOI":"10.1007\/978-3-030-01252-6_6"},{"key":"1080_CR21","doi-asserted-by":"crossref","unstructured":"Liu H, Jiang B, Song Y, Huang W, Yang C (2020) Rethinking image inpainting via a mutual encoder\u2013decoder with feature equalizations. In: ECCV","DOI":"10.1007\/978-3-030-58536-5_43"},{"key":"1080_CR22","doi-asserted-by":"crossref","unstructured":"Lugmayr A, Danelljan M, Romero A, Yu F, Timofte R, Van Gool L (2022) RePaint: inpainting using denoising diffusion probabilistic models. In: CVPR, pp 11451\u201311461","DOI":"10.1109\/CVPR52688.2022.01117"},{"key":"1080_CR23","unstructured":"Miyato T, Kataoka T, Koyama M, Yoshida Y (2018) Spectral normalization for generative adversarial networks. In: ICLR"},{"key":"1080_CR24","doi-asserted-by":"crossref","unstructured":"Pan X, Zhan X, Dai B, Lin D, Loy CC, Luo P (2020) Exploiting deep generative prior for versatile image restoration and manipulation. In: ECCV","DOI":"10.1007\/978-3-030-58536-5_16"},{"key":"1080_CR25","doi-asserted-by":"crossref","unstructured":"Pathak D, Kr\u00e4henb\u00fchl P, Donahue J, Darrell T, Efros AA (2016) Context encoders: feature learning by inpainting. In: CVPR","DOI":"10.1109\/CVPR.2016.278"},{"key":"1080_CR26","doi-asserted-by":"crossref","unstructured":"Sch\u00f6nfeld E, Schiele B, Khoreva A (2020) A U-Net based discriminator for generative adversarial networks. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00823"},{"key":"1080_CR27","doi-asserted-by":"crossref","unstructured":"Shaham TR, Dekel T, Michaeli T (2019) SinGAN: learning a generative model from a single natural image. In: ICCV","DOI":"10.1109\/ICCV.2019.00467"},{"key":"1080_CR28","doi-asserted-by":"crossref","unstructured":"Shocher A, Gandelsman Y, Mosseri I, Yarom M, Irani M, Freeman WT, Dekel T (2020) Semantic pyramid for image generation. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00748"},{"key":"1080_CR29","unstructured":"Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. In: ICLR"},{"key":"1080_CR30","doi-asserted-by":"crossref","unstructured":"Wan Z, Zhang J, Chen D, Liao J (2021) High-fidelity pluralistic image completion with transformers. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00465"},{"key":"1080_CR31","doi-asserted-by":"crossref","unstructured":"Wang X, Yu K, Wu S, Gu J, Liu Y, Dong C, Qiao Y, Loy CC (2018) ESRGAN: enhanced super-resolution generative adversarial networks. In: ECCVW","DOI":"10.1007\/978-3-030-11021-5_5"},{"issue":"4","key":"1080_CR32","doi-asserted-by":"publisher","first-page":"600","DOI":"10.1109\/TIP.2003.819861","volume":"13","author":"Z Wang","year":"2004","unstructured":"Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600\u2013612","journal-title":"IEEE Trans Image Process"},{"key":"1080_CR33","doi-asserted-by":"crossref","unstructured":"Xiao Z, Zhang H, Tong H, Xu X (2022) An efficient temporal network with dual self-distillation for electroencephalography signal classification. In: BIBM, pp 1759\u20131762","DOI":"10.1109\/BIBM55620.2022.9995049"},{"key":"1080_CR34","first-page":"1","volume":"71","author":"H Xing","year":"2022","unstructured":"Xing H, Xiao Z, Qu R, Zhu Z, Zhao B (2022) An efficient federated distillation learning system for multitask time series classification. IEEE Trans Instrum Meas 71:1\u201312","journal-title":"IEEE Trans Instrum Meas"},{"key":"1080_CR35","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2021.107338","volume":"229","author":"Z Xiao","year":"2021","unstructured":"Xiao Z, Xu X, Xing H, Song F, Wang X, Zhao B (2021) A federated learning system with enhanced feature extraction for human activity recognition. Knowl-Based Syst 229:107338","journal-title":"Knowl-Based Syst"},{"key":"1080_CR36","doi-asserted-by":"crossref","unstructured":"Yang C, Lu X, Lin Z, Shechtman E, Wang O, Li H (2017) High-resolution image inpainting using multi-scale neural patch synthesis. In: CVPR","DOI":"10.1109\/CVPR.2017.434"},{"key":"1080_CR37","doi-asserted-by":"crossref","unstructured":"Yang F, Yang H, Fu J, Lu H, Guo B (2020) Learning texture transformer network for image super-resolution. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00583"},{"key":"1080_CR38","doi-asserted-by":"crossref","unstructured":"Yang J, Qi Z, Shi Y (2020) Learning to incorporate structure knowledge for image inpainting. In: AAAI","DOI":"10.20944\/preprints202002.0125.v1"},{"key":"1080_CR39","doi-asserted-by":"crossref","unstructured":"Yi Z, Tang Q, Azizi S, Jang D, Xu Z (2020) Contextual residual aggregation for ultra high-resolution image inpainting. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00753"},{"key":"1080_CR40","doi-asserted-by":"crossref","unstructured":"Yu J, Lin Z, Yang J, Shen X, Lu X, Huang TS (2018) Generative image inpainting with contextual attention. In: CVPR","DOI":"10.1109\/CVPR.2018.00577"},{"key":"1080_CR41","doi-asserted-by":"crossref","unstructured":"Yu J, Lin Z, Yang J, Shen X, Lu X, Huang TS (2019) Free-form image inpainting with gated convolution. In: ICCV","DOI":"10.1109\/ICCV.2019.00457"},{"key":"1080_CR42","doi-asserted-by":"crossref","unstructured":"Zeng Y, Fu J, Chao H, Guo B (2019) Learning pyramid-context encoder network for high-quality image inpainting. In: CVPR","DOI":"10.1109\/CVPR.2019.00158"},{"key":"1080_CR43","doi-asserted-by":"crossref","unstructured":"Zeng Y, Lin Z, Yang J, Zhang J, Shechtman E, Lu H (2020) High-resolution image inpainting with iterative confidence feedback and guided upsampling. In: ECCV","DOI":"10.1007\/978-3-030-58529-7_1"},{"key":"1080_CR44","doi-asserted-by":"crossref","unstructured":"Zeng Y, Lin Z, Lu H, Patel VM (2021) CR-Fill: generative image inpainting with auxiliary contextual reconstruction. In: ICCV, pp 14144\u201314153","DOI":"10.1109\/ICCV48922.2021.01390"},{"key":"1080_CR45","doi-asserted-by":"crossref","unstructured":"Zheng C, Cham T-J, Cai J, Phung D (2022) Bridging global context interactions for high-fidelity image completion. In: CVPR, pp 11512\u201311522","DOI":"10.1109\/CVPR52688.2022.01122"},{"key":"1080_CR46","doi-asserted-by":"crossref","unstructured":"Zhou B, Lapedriza A, Khosla A, Oliva A, Torralba A (2017) Places: a 10 million image database for scene recognition. IEEE Trans Pattern Anal Mach Intell 40(6):1452\u20131464","DOI":"10.1109\/TPAMI.2017.2723009"},{"key":"1080_CR47","doi-asserted-by":"publisher","first-page":"4855","DOI":"10.1109\/TIP.2021.3076310","volume":"30","author":"M Zhu","year":"2021","unstructured":"Zhu M, He D, Li X, Li C, Li F, Liu X, Ding E, Zhang Z (2021) Image inpainting by end-to-end cascaded refinement with mask awareness. IEEE Trans Image Process 30:4855\u20134866","journal-title":"IEEE Trans Image Process"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01080-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-023-01080-w\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01080-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,10,27]],"date-time":"2023-10-27T19:13:53Z","timestamp":1698434033000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-023-01080-w"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,5,3]]},"references-count":47,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2023,12]]}},"alternative-id":["1080"],"URL":"https:\/\/doi.org\/10.1007\/s40747-023-01080-w","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,5,3]]},"assertion":[{"value":"29 December 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"17 April 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 May 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}