{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,3,22]],"date-time":"2025-03-22T12:17:35Z","timestamp":1742645855380,"version":"3.37.3"},"reference-count":41,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2023,7,20]],"date-time":"2023-07-20T00:00:00Z","timestamp":1689811200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,7,20]],"date-time":"2023-07-20T00:00:00Z","timestamp":1689811200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["11872069"],"award-info":[{"award-number":["11872069"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"the Central Government Funds of Guiding Local Scientific and Technological Development for Sichuan Province","award":["2021ZYD0034"],"award-info":[{"award-number":["2021ZYD0034"]}]},{"name":"National Ministry of Education \u201cChunhui Plan\u201d Scientific Research Project","award":["Z2017076"],"award-info":[{"award-number":["Z2017076"]}]},{"DOI":"10.13039\/501100019014","name":"Chengdu Science and Technology Program","doi-asserted-by":"publisher","award":["2016-YF04-00044-JH"],"award-info":[{"award-number":["2016-YF04-00044-JH"]}],"id":[{"id":"10.13039\/501100019014","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2024,2]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Very recently, with the widespread research of deep learning, its achievements are increasingly evident in image inpainting tasks. 
However, many existing multi-stage methods fail to effectively inpaint larger missing areas; their common drawback is that the result of each stage is easily misguided by wrong content generated in the previous stage. To solve this issue, in this paper a novel one-stage generative adversarial network based on a progressive decoding architecture and gradient guidance is proposed. Firstly, gradient priors are extracted at the encoder stage and passed to the decoding branches, and a multiscale attention fusion group is used to help the network understand the image features. Secondly, multiple parallel decoding branches fill and refine the missing regions by passing the reconstructed priors top-down. This progressively guided repair avoids the detrimental effects of inappropriate priors. The joint guidance of features and gradient priors helps the restoration results contain correct structure and rich details. The progressive guidance is achieved by our fusion strategy, which combines reimage convolution with a designed channel coordinate attention to fuse and reweight the features of different branches. Finally, we use multiscale fusion to merge the feature maps at different scales reconstructed by the last decoding branch and map them to the image space, which further improves the semantic plausibility of the restoration results. 
Experiments on multiple datasets show that the qualitative and quantitative results of our computationally efficient model are competitive with those of state-of-the-art methods.<\/jats:p>","DOI":"10.1007\/s40747-023-01158-5","type":"journal-article","created":{"date-parts":[[2023,7,20]],"date-time":"2023-07-20T02:01:59Z","timestamp":1689818519000},"page":"289-303","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Image inpainting via progressive decoder and gradient guidance"],"prefix":"10.1007","volume":"10","author":[{"given":"Shuang","family":"Hou","sequence":"first","affiliation":[]},{"given":"Xiucheng","family":"Dong","sequence":"additional","affiliation":[]},{"given":"Chencheng","family":"Yang","sequence":"additional","affiliation":[]},{"given":"Chao","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Hongda","family":"Guo","sequence":"additional","affiliation":[]},{"given":"Fan","family":"Zhang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,7,20]]},"reference":[{"key":"1158_CR1","doi-asserted-by":"crossref","unstructured":"Chang LY, Liu ZY, Hsu W (2019) VORNet: spatio-temporally consistent video inpainting for object removal. In: 2019 IEEE\/CVF conference on computer vision and pattern recognition workshops (CVPRW). IEEE","DOI":"10.1109\/CVPRW.2019.00229"},{"key":"1158_CR2","doi-asserted-by":"crossref","unstructured":"Hertz A, Fogel S, Hanocka R, Giryes R, Cohen-Or D (2019) Blind visual motif removal from a single image. arXiv preprint arXiv:1904.02756","DOI":"10.1109\/CVPR.2019.00702"},{"key":"1158_CR3","doi-asserted-by":"crossref","unstructured":"Nakamura T, Zhu A, Yanai K, Uchida S (2017) Scene text eraser. 
In: 14th IAPR international conference on document analysis and recognition (ICDAR), pp 832\u2013837","DOI":"10.1109\/ICDAR.2017.141"},{"key":"1158_CR4","doi-asserted-by":"publisher","first-page":"10807","DOI":"10.1007\/s11042-017-5077-z","volume":"77","author":"Q Fan","year":"2018","unstructured":"Fan Q, Zhang L (2018) A novel patch matching algorithm for exemplar-based image inpainting. Multimed Tools Appl 77:10807\u201310821","journal-title":"Multimed Tools Appl"},{"issue":"4","key":"1158_CR5","doi-asserted-by":"publisher","first-page":"3549","DOI":"10.1007\/s13369-018-3592-5","volume":"44","author":"J Zeng","year":"2019","unstructured":"Zeng J, Fu X, Leng L, Wang C (2019) Image inpainting algorithm based on saliency map and gray entropy. Arabian J Sci Eng 44(4):3549\u20133558","journal-title":"Arabian J Sci Eng"},{"key":"1158_CR6","first-page":"1","volume":"22","author":"F Yao","year":"2018","unstructured":"Yao F (2018) Damaged region filling by improved Criminisi image inpainting algorithm for thangka. Clust Comput 22:1\u20139","journal-title":"Clust Comput"},{"key":"1158_CR7","unstructured":"Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial nets. In: Proceedings of the 2014 NeurIPS, pp 2672\u20132680"},{"key":"1158_CR8","doi-asserted-by":"crossref","unstructured":"Pathak D, Krahenbuhl P, Donahue J, Darrell T, Efros A (2016) Context encoders: feature learning by inpainting. In: Proceedings of the 2016 CVPR, pp 2536\u20132544","DOI":"10.1109\/CVPR.2016.278"},{"key":"1158_CR9","doi-asserted-by":"publisher","first-page":"3460","DOI":"10.1007\/s10489-020-01971-2","volume":"51","author":"Y Chen","year":"2021","unstructured":"Chen Y, Zhang H, Liu L, Chen X, Zhang Q, Yang K, Xia R, Xie J (2021) Research on image Inpainting algorithm of improved GAN based on two-discriminations networks. 
Appl Intell 51:3460\u20133474","journal-title":"Appl Intell"},{"key":"1158_CR10","doi-asserted-by":"crossref","unstructured":"Liao L, Xiao J, Wang Z, Lin C-W, Satoh S (2020) Guidance and evaluation: semantic-aware image inpainting for mixed scenes. In: Proceedings of the 2020 ECCV, pp 683\u2013700","DOI":"10.1007\/978-3-030-58583-9_41"},{"issue":"3","key":"1158_CR11","doi-asserted-by":"publisher","DOI":"10.1016\/j.image.2020.115929","volume":"87","author":"H Shao","year":"2020","unstructured":"Shao H, Wang Y, Fu Y (2020) Generative image inpainting via edge structure and color aware fusion. Signal Process Image Commun 87(3):115929","journal-title":"Signal Process Image Commun"},{"key":"1158_CR12","unstructured":"Nazeri K, Ng E, Joseph T, Qureshi F, Ebrahimi M (2019) EdgeConnect: generative image inpainting with adversarial edge learning. In: Proceedings of the 2019 ICCVW"},{"key":"1158_CR13","doi-asserted-by":"crossref","unstructured":"Ren Y, Yu X, Zhang R (2019) StructureFlow: image inpainting via structure-aware appearance flow. In: Proceedings of the 2019 ICCV, pp 181\u2013190","DOI":"10.1109\/ICCV.2019.00027"},{"key":"1158_CR14","doi-asserted-by":"crossref","unstructured":"Guo X, Yang H, Huang D (2021) Image inpainting via conditional texture and structure dual generation. In: International conference on computer vision","DOI":"10.1109\/ICCV48922.2021.01387"},{"key":"1158_CR15","first-page":"329","volume-title":"Image inpainting via generative multi-column convolutional neural networks","author":"Y Wang","year":"2018","unstructured":"Wang Y, Tao X, Qi X, Shen X, Jia J (2018) Image inpainting via generative multi-column convolutional neural networks. Curran Associates Inc, Red Hook, pp 329\u2013338"},{"key":"1158_CR16","doi-asserted-by":"crossref","unstructured":"Zhu M, He D, Li X, Li C, Li F, Liu X, Ding E, Zhang Z (2021) Image inpainting by end-to-end cascaded refinement with mask awareness. 
In: IEEE transactions on image processing, pp 4855\u20134866","DOI":"10.1109\/TIP.2021.3076310"},{"key":"1158_CR17","doi-asserted-by":"publisher","first-page":"259","DOI":"10.1016\/j.neucom.2020.03.090","volume":"405","author":"M Chen","year":"2020","unstructured":"Chen M, Liu Z, Ye L, Wang Y (2020) Attentional coarse-and-fine generative adversarial networks for image inpainting. Neurocomputing 405:259\u2013269","journal-title":"Neurocomputing"},{"key":"1158_CR18","doi-asserted-by":"crossref","unstructured":"Shen L , Tao H, Ni Y, Wang Y, Stojanovic V (2023) Improved YOLOv3 model with feature map cropping for multi-scale road object detection. Meas Sci Technol 34(4)","DOI":"10.1088\/1361-6501\/acb075"},{"key":"1158_CR19","doi-asserted-by":"crossref","unstructured":"Han K, Wang Y, Tian Q, Guo J, Xu C (2020) GhostNet: more features from cheap operations. In: Proceedings of the 2021 CVPR, pp 1580\u20131589","DOI":"10.1109\/CVPR42600.2020.00165"},{"key":"1158_CR20","doi-asserted-by":"crossref","unstructured":"Hou Q, Zhou D, Feng J (2021) Coordinate attention for efficient mobile network design. In: Conference on computer vision and pattern recognition (CVPR), pp 13708\u201313717","DOI":"10.1109\/CVPR46437.2021.01350"},{"key":"1158_CR21","doi-asserted-by":"crossref","unstructured":"Zhou B, Lapedriza A, Khosla A, Oliva A, Torralba A (2017) Places: a 10 million image database for scene recognition. In: IEEE transactions on pattern analysis and machine intelligence, pp 1452\u20131464","DOI":"10.1109\/TPAMI.2017.2723009"},{"key":"1158_CR22","unstructured":"Karras T, Aila T, Laine S, Lehtinen J (2017) Progressive growing of GANs for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196"},{"issue":"4CD","key":"1158_CR23","first-page":"107.1","volume":"36","author":"S Iizuka","year":"2017","unstructured":"Iizuka S, Simo-Serra E, Ishikawa H (2017) Globally and locally consistent image completion. 
ACM Trans Graph (TOG) 36(4CD):107.1-107.14","journal-title":"ACM Trans Graph (TOG)"},{"key":"1158_CR24","doi-asserted-by":"crossref","unstructured":"Yan Z, Li X, Li M, Zuo W, Shan S (2018) Shift-net: image inpainting via deep feature rearrangement. In: Computer vision-ECCV, pp 3\u201319","DOI":"10.1007\/978-3-030-01264-9_1"},{"key":"1158_CR25","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijleo.2021.167101","volume":"242","author":"Y Shi","year":"2021","unstructured":"Shi Y, Fan Y, Zhang N (2021) A generative image inpainting network based on the attention transfer network across layer mechanism. Optik Int J Light Electron Opt 242:167101","journal-title":"Optik Int J Light Electron Opt"},{"key":"1158_CR26","doi-asserted-by":"publisher","DOI":"10.1007\/s10489-022-03387-6","author":"J Jiang","year":"2022","unstructured":"Jiang J, Dong X, Li T (2022) Parallel adaptive guidance network for image inpainting. Appl Intell. https:\/\/doi.org\/10.1007\/s10489-022-03387-6","journal-title":"Appl Intell"},{"key":"1158_CR27","doi-asserted-by":"crossref","unstructured":"Li J, Wang N, Zhang L, Du B, Tao D (2020) Recurrent feature reasoning for image inpainting. In: Proceedings of the 2020 CVPR, pp 7757\u20137765","DOI":"10.1109\/CVPR42600.2020.00778"},{"key":"1158_CR28","doi-asserted-by":"crossref","unstructured":"Guo Q, Li X, Juefei-Xu F, Yu H, Liu Y, Wang S (2021) JPGNet: joint predictive filtering and generative network for image inpainting. In: Proceedings of the 29th ACM International conference on multimedia, pp 386\u2013394","DOI":"10.1145\/3474085.3475170"},{"key":"1158_CR29","doi-asserted-by":"crossref","unstructured":"Matsui T, Ikehara M (2020) Single-image fence removal using deep convolutional neural network. In: IEEE Access, pp 38846\u201338854","DOI":"10.1109\/ACCESS.2019.2960087"},{"key":"1158_CR30","doi-asserted-by":"crossref","unstructured":"Ma C, Rao Y, Cheng Y, Chen C, Lu J, Zhou J (2020) Structure-preserving super resolution with gradient guidance. 
In: IEEE\/CVF conference on computer vision and pattern recognition (CVPR), pp 7766\u20137775","DOI":"10.1109\/CVPR42600.2020.00779"},{"key":"1158_CR31","doi-asserted-by":"crossref","unstructured":"Yuan J, Yu H (2019) Multi-scale generative model for image completion. In: Proceedings of 2019 2nd international conference on algorithms, computing and artificial intelligence (ACAI 2019), pp 21\u201330","DOI":"10.1145\/3377713.3377716"},{"key":"1158_CR32","doi-asserted-by":"crossref","unstructured":"Li T, Dong X, Lin H (2020) Guided depth map super-resolution using recumbent Y network. In: IEEE Access, pp 122695\u2013122708","DOI":"10.1109\/ACCESS.2020.3007667"},{"key":"1158_CR33","doi-asserted-by":"publisher","first-page":"259","DOI":"10.1016\/j.neucom.2020.03.090","volume":"405","author":"M Chen","year":"2020","unstructured":"Chen M, Liu Z, Ye L, Wang Y (2020) Attentional coarse- and-fine generative adversarial networks for image inpainting. Neurocomputing 405:259\u2013269","journal-title":"Neurocomputing"},{"key":"1158_CR34","doi-asserted-by":"crossref","unstructured":"Ji W, Li J, Yu S, Zhang M, Piao Y, Yao S, Cheng L (2021) Calibrated RGB-D salient object detection. In: Proceedings of the 2021 CVPR, 2021, pp 9471\u20139481","DOI":"10.1109\/CVPR46437.2021.00935"},{"key":"1158_CR35","doi-asserted-by":"crossref","unstructured":"Johnson J, Alahi A, Fei-Fei L (2016) Perceptual losses for real-time style transfer and super resolution. In: Proceedings of the 2016 ECCV, pp 694\u2013711","DOI":"10.1007\/978-3-319-46475-6_43"},{"key":"1158_CR36","unstructured":"Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arxiv:1409.1556"},{"key":"1158_CR37","doi-asserted-by":"crossref","unstructured":"Liu G, Reda F, Shih K, Wang T, Tao A, Catanzaro B (2018) Image inpainting for irregular holes using partial convolutions. 
In: Proceedings of the 2018 ECCV, pp 85\u2013100","DOI":"10.1007\/978-3-030-01252-6_6"},{"key":"1158_CR38","unstructured":"Kingma D, Adam J (2015) A method for stochastic optimization. In: Proceedings of the 2015 ICLR"},{"key":"1158_CR39","doi-asserted-by":"crossref","unstructured":"Zeng Y, Fu J, Chao H, Guo B (2019) Learning pyramid-context encoder network for high-quality image inpainting. In: Proceedings of the 2019 CVPR, pp 1486\u20131494","DOI":"10.1109\/CVPR.2019.00158"},{"issue":"4","key":"1158_CR40","doi-asserted-by":"publisher","first-page":"600","DOI":"10.1109\/TIP.2003.819861","volume":"13","author":"Z Wang","year":"2004","unstructured":"Wang Z, Bovik AC, Sheikh HR, Simoncelli EP et al (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600-612","journal-title":"IEEE Trans Image Process"},{"key":"1158_CR41","doi-asserted-by":"crossref","unstructured":"Zhang R, Isola P, Efros A, Shechtman E, Wang O (2018) The unreasonable effectiveness of deep features as a perceptual metric. 
In: Proceedings of the 2018 CVPR, pp 586\u2013595","DOI":"10.1109\/CVPR.2018.00068"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01158-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-023-01158-5\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01158-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,2,10]],"date-time":"2024-02-10T22:15:35Z","timestamp":1707603335000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-023-01158-5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,7,20]]},"references-count":41,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2024,2]]}},"alternative-id":["1158"],"URL":"https:\/\/doi.org\/10.1007\/s40747-023-01158-5","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"type":"print","value":"2199-4536"},{"type":"electronic","value":"2198-6053"}],"subject":[],"published":{"date-parts":[[2023,7,20]]},"assertion":[{"value":"6 January 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 June 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"20 July 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}