{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,3,22]],"date-time":"2025-03-22T12:06:44Z","timestamp":1742645204767,"version":"3.37.3"},"reference-count":39,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2023,1,20]],"date-time":"2023-01-20T00:00:00Z","timestamp":1674172800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,1,20]],"date-time":"2023-01-20T00:00:00Z","timestamp":1674172800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["11872069"],"award-info":[{"award-number":["11872069"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100017676","name":"Chunhui Project Foundation of the Education Department of China","doi-asserted-by":"publisher","award":["Z2017076"],"award-info":[{"award-number":["Z2017076"]}],"id":[{"id":"10.13039\/501100017676","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2023,8]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>For images corrupted for various reasons, the size of the corrupted area is often arbitrary, and inpainting larger missing areas remains a challenge. Though popular multistage networks ease the inpainting difficulty by repairing the damaged image from coarse to fine, their common drawback is that the result of each stage is easily misguided by wrong content generated in the previous stage. To address this problem, we propose a novel progressive guidance decoding network. 
First, multiple parallel decoding branches fill and refine the missing regions by passing the reconstructed priors top\u2013down. This progressive-guidance style of inpainting avoids the adverse effects of inappropriate premises, since the decoding branches can learn which priors to utilize, and decoder convolution layers at different locations pass down different priors. The joint guidance of feature and gradient priors helps the inpainting result contain the correct structure and rich details. The second fold of progressive guidance is achieved by our fusion strategy, which combines ghost convolution with the designed cascaded efficient channel attention (CECA) to fuse and reweight the features from different branches. CECA explores the dependencies among adjacent and non-adjacent channels more effectively than popular attention modules. Finally, we merge the different-scale feature maps reconstructed by the last decoding branch and map them to the image space, which further improves the semantic plausibility of the restoration results. 
Extensive experiments verify the effectiveness of our method in both subjective and objective evaluation.<\/jats:p>","DOI":"10.1007\/s40747-023-00966-z","type":"journal-article","created":{"date-parts":[[2023,1,20]],"date-time":"2023-01-20T20:54:27Z","timestamp":1674248067000},"page":"4555-4570","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Inpainting larger missing regions via progressive guidance decoding network"],"prefix":"10.1007","volume":"9","author":[{"given":"Xiucheng","family":"Dong","sequence":"first","affiliation":[]},{"given":"Jinyang","family":"Jiang","sequence":"additional","affiliation":[]},{"given":"Shuang","family":"Hou","sequence":"additional","affiliation":[]},{"given":"Chencheng","family":"Yang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,1,20]]},"reference":[{"key":"966_CR1","doi-asserted-by":"publisher","DOI":"10.1016\/j.image.2020.115929","volume":"87","author":"H Shao","year":"2020","unstructured":"Shao H, Wang Y, Fu Y, Yin Z (2020) Generative image inpainting via edge structure and color aware fusion. Signal Process Image Commun 87:115929","journal-title":"Signal Process Image Commun"},{"key":"966_CR2","doi-asserted-by":"crossref","unstructured":"Yu J, Lin Z, Yang J, Shen X, Lu X, Huang TS (2018) Generative image inpainting with contextual attention. In: Proceedings of the 2018 CVPR, pp 5505\u20135514","DOI":"10.1109\/CVPR.2018.00577"},{"key":"966_CR3","doi-asserted-by":"crossref","unstructured":"Yu J, Zhe L, Yang J, Shen X, Lu X, Huang T (2019) Free-form image inpainting with gated convolution. In: Proceedings of the 2019 ICCV, pp 4470\u20134479","DOI":"10.1109\/ICCV.2019.00457"},{"key":"966_CR4","doi-asserted-by":"crossref","unstructured":"Xiong W, Yu J, Lin Z, Jiang J, Lu X, Barnes C, Luo J (2019) Foreground-aware image inpainting. 
In: Proceedings of the 2019 CVPR, pp 5833\u20135841","DOI":"10.1109\/CVPR.2019.00599"},{"key":"966_CR5","unstructured":"Nazeri K, Ng E, Joseph T, Qureshi F, Ebrahimi M (2019) EdgeConnect: generative image inpainting with adversarial edge learning. In: Proceedings of the 2019 ICCVW"},{"key":"966_CR6","unstructured":"Wang Y, Tao X, Qi X, Shen X, Jia J (2018) Image inpainting via generative multi-column convolutional neural networks. Adv Neural Inf Process Syst 331\u2013340"},{"key":"966_CR7","doi-asserted-by":"crossref","unstructured":"Ren Y, Yu X, Zhang R (2019) StructureFlow: image inpainting via structure-aware appearance flow. In: Proceedings of the 2019 ICCV, pp 181\u2013190","DOI":"10.1109\/ICCV.2019.00027"},{"key":"966_CR8","doi-asserted-by":"crossref","unstructured":"Guo Z, Chen Z, Yu T, Chen J, Liu S (2019) Progressive image inpainting with full-resolution residual network. In: Proceedings of the 27th ACM International Conference on Multimedia, pp 2496\u20132504","DOI":"10.1145\/3343031.3351022"},{"key":"966_CR9","doi-asserted-by":"crossref","unstructured":"Li J, Wang N, Zhang L, Du B, Tao D (2020) Recurrent feature reasoning for image inpainting. In: Proceedings of the 2020 CVPR, pp 7757\u20137765","DOI":"10.1109\/CVPR42600.2020.00778"},{"key":"966_CR10","doi-asserted-by":"crossref","unstructured":"Zhu M, He D, Li X, Li C, Li F, Liu X, Ding E, Zhang Z (2021) Image inpainting by end-to-end cascaded refinement with mask awareness. IEEE Transactions on Image Processing, pp 4855\u20134866","DOI":"10.1109\/TIP.2021.3076310"},{"key":"966_CR11","doi-asserted-by":"crossref","unstructured":"Guo X, Yang H, Huang D (2021) Image inpainting via conditional texture and structure dual generation. 
In: Proceedings of the 2021 ICCV, pp 14114\u201314123","DOI":"10.1109\/ICCV48922.2021.01387"},{"key":"966_CR12","doi-asserted-by":"publisher","first-page":"259","DOI":"10.1016\/j.neucom.2020.03.090","volume":"405","author":"M Chen","year":"2020","unstructured":"Chen M, Liu Z, Ye L, Wang Y (2020) Attentional coarse-and-fine generative adversarial networks for image inpainting. Neurocomputing 405:259\u2013269","journal-title":"Neurocomputing"},{"key":"966_CR13","doi-asserted-by":"crossref","unstructured":"Han K, Wang Y, Tian Q, Guo J, Xu C (2020) GhostNet: More Features From Cheap Operations. In: Proceedings of the 2020 CVPR, pp. 1580-1589","DOI":"10.1109\/CVPR42600.2020.00165"},{"key":"966_CR14","doi-asserted-by":"publisher","unstructured":"Jiang J, Dong X, Fan Li, Zhang T, Qian H, Chen G (2022) Parallel Adaptive Guidance Network for Image Inpainting. Applied Intelligence. https:\/\/doi.org\/10.1007\/s10489-022-03387-6","DOI":"10.1007\/s10489-022-03387-6"},{"issue":"6","key":"966_CR15","doi-asserted-by":"publisher","first-page":"1452","DOI":"10.1109\/TPAMI.2017.2723009","volume":"40","author":"B Zhou","year":"2017","unstructured":"Zhou B, Lapedriza A, Khosla A, Oliva A, Torralba A (2017) Places: A 10 Million Image Database for Scene Recognition. IEEE TPAMI 40(6):1452\u20131464","journal-title":"IEEE TPAMI"},{"key":"966_CR16","unstructured":"Karras T, Aila T, Laine S, Lehtinen J (2017) Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196"},{"key":"966_CR17","doi-asserted-by":"crossref","unstructured":"Pathak D, Krahenbuhl P, Donahue J, Darrell T, Efros AA (2016), Context Encoders: Feature Learning by Inpainting, in: Proceedings of the 2016 CVPR, pp. 
2536-2544","DOI":"10.1109\/CVPR.2016.278"},{"key":"966_CR18","doi-asserted-by":"crossref","unstructured":"Iizuka S, Simo-Serra E, Ishikawa H (2017) Globally and locally consistent image completion, ACM Trans Graphics (TOG) 36(4):107","DOI":"10.1145\/3072959.3073659"},{"key":"966_CR19","doi-asserted-by":"crossref","unstructured":"Zeng Y, Fu J, Chao H, Guo B (2019), Learning Pyramid-Context Encoder Network for High-Quality Image Inpainting, in: Proceedings of the 2019 CVPR, pp. 1486-1494","DOI":"10.1109\/CVPR.2019.00158"},{"key":"966_CR20","doi-asserted-by":"crossref","unstructured":"Liao L, Xiao J, Wang Z, Lin C-W, Satoh S (2020), Guidance and evaluation: semantic-aware image inpainting for mixed scenes. in: Proceedings of the 2020 ECCV, pp. 683-700","DOI":"10.1007\/978-3-030-58583-9_41"},{"key":"966_CR21","doi-asserted-by":"crossref","unstructured":"Wang J, Chen S, Wu Z, Jiang Y-G (2022), FT-TDR: Frequency-guided Transformer and Top-Down Refinement Network for Blind Face Inpainting. IEEE Transactions on Multimedia","DOI":"10.1109\/TMM.2022.3146774"},{"key":"966_CR22","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2020.107669","volume":"111","author":"Z Pei","year":"2021","unstructured":"Pei Z, Jin M, Zhang Y, Ma M, Yang Y-H (2021) All-in-focus synthetic aperture imaging using generative adversarial network-based semantic inpainting. Pattern Recognit. 111:107669","journal-title":"Pattern Recognit."},{"key":"966_CR23","first-page":"3720","volume":"30","author":"N Wang","year":"2021","unstructured":"Wang N, Wang W, Hu W, Fenster A, Li S (2021) Thanka Mural Inpainting Based on Multi-Scale Adaptive Partial Convolution and Stroke-Like Mask. 
IEEE TIP 30:3720\u20133733","journal-title":"IEEE TIP"},{"key":"966_CR24","doi-asserted-by":"crossref","unstructured":"Guo Q, Li X, Juefei-Xu F, Yu H, Liu Y, Wang S (2021), JPGNet: joint predictive filtering and generative network for image inpainting, in: Proceedings of the 29th ACM International Conference on Multimedia, pp. 386-394","DOI":"10.1145\/3474085.3475170"},{"key":"966_CR25","doi-asserted-by":"crossref","unstructured":"Liu S, Huang D, Wang Y (2018), Receptive Field Block Net for Accurate and Fast Object Detection, in: Proceedings of the 2018 ECCV, pp. 404-419","DOI":"10.1007\/978-3-030-01252-6_24"},{"key":"966_CR26","unstructured":"Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016) Rethinking the inception architecture for computer vision, In: Proceedings of the 2016 CVPR, pp. 2818-2826"},{"key":"966_CR27","doi-asserted-by":"crossref","unstructured":"Chen L-C, Zhu Y, Papandreou G, Schroff F, Adam H (2018), Encoder-decoder with atrous separable convolution for semantic image segmentation, in: Proceedings of the 2018 ECCV, pp. 801-818","DOI":"10.1007\/978-3-030-01234-2_49"},{"key":"966_CR28","doi-asserted-by":"crossref","unstructured":"Gao H, Chen M, Zhao K, Zhang Y, Yang H, Torr P (2019) Res2Net: A New Multi-Scale Backbone Architecture. IEEE TPAMI 43(2):652\u2013662","DOI":"10.1109\/TPAMI.2019.2938758"},{"key":"966_CR29","doi-asserted-by":"crossref","unstructured":"Woo S, Park J, Lee JY (2018) CBAM: Convolutional block attention module, in: Proceedings of the 2018 ECCV, pp. 3-19","DOI":"10.1007\/978-3-030-01234-2_1"},{"key":"966_CR30","doi-asserted-by":"crossref","unstructured":"Li T, Dong X, Lin H (2020) Guided Depth Map Super-Resolution Using Recumbent Y Network, IEEE Access, pp. 
122695-122708","DOI":"10.1109\/ACCESS.2020.3007667"},{"key":"966_CR31","first-page":"271","volume-title":"Pattern Classification and Scene Analysis","author":"RO Duda","year":"1973","unstructured":"Duda RO, Hart PE (1973) Pattern Classification and Scene Analysis. John Wiley and Sons, New York, pp 271\u2013272"},{"key":"966_CR32","doi-asserted-by":"crossref","unstructured":"Wang Q, Wu B, Zhu P, Li P, Hu Q (2020), ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks, in: Proceedings of the 2020 CVPR, pp. 95-106","DOI":"10.1109\/CVPR42600.2020.01155"},{"key":"966_CR33","doi-asserted-by":"crossref","unstructured":"Ji W, Li J, Yu S, Zhang M, Piao Y, Yao S, Cheng L (2021), Calibrated RGB-D Salient Object Detection. in: Proceedings of the 2021 CVPR, pp. 9471-9481","DOI":"10.1109\/CVPR46437.2021.00935"},{"key":"966_CR34","doi-asserted-by":"crossref","unstructured":"Johnson J, Alahi A, Fei-Fei L (2016), Perceptual losses for real-time style transfer and super-resolution, in: Proceedings of the 2016 ECCV, pp. 694-711","DOI":"10.1007\/978-3-319-46475-6_43"},{"key":"966_CR35","unstructured":"Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D (2014) Generative adversarial nets, in: Proceedings of the 2014 NeurIPS, pp. 2672-2680"},{"key":"966_CR36","unstructured":"Simonyan K, Zisserman A (2014) Very Deep Convolutional Networks for Large-Scale Image Recognition, in: Proceedings of the 2014 ICLR"},{"key":"966_CR37","doi-asserted-by":"crossref","unstructured":"Liu G, Reda FA, Shih KJ, Wang TC, Tao A, Catanzaro B (2018), Image Inpainting for Irregular Holes Using Partial Convolutions, in: Proceedings of the 2018 ECCV, pp. 
85-100","DOI":"10.1007\/978-3-030-01252-6_6"},{"key":"966_CR38","unstructured":"Kingma DP, Ba J (2015) Adam: A method for stochastic optimization, in: Proceedings of the 2015 ICLR"},{"key":"966_CR39","doi-asserted-by":"crossref","unstructured":"Zhang R, Isola P, Efros AA, Shechtman E, Wang O (2018), The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. in: Proceedings of the 2018 CVPR, pp. 586-595","DOI":"10.1109\/CVPR.2018.00068"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-00966-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-023-00966-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-00966-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,7,27]],"date-time":"2023-07-27T13:32:55Z","timestamp":1690464775000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-023-00966-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,1,20]]},"references-count":39,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2023,8]]}},"alternative-id":["966"],"URL":"https:\/\/doi.org\/10.1007\/s40747-023-00966-z","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"type":"print","value":"2199-4536"},{"type":"electronic","value":"2198-6053"}],"subject":[],"published":{"date-parts":[[2023,1,20]]},"assertion":[{"value":"13 July 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"1 January 
2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"20 January 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}