{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,2,10]],"date-time":"2024-02-10T23:06:51Z","timestamp":1707606411205},"reference-count":46,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2022,4,26]],"date-time":"2022-04-26T00:00:00Z","timestamp":1650931200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,4,26]],"date-time":"2022-04-26T00:00:00Z","timestamp":1650931200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"national natural science foundation of china","doi-asserted-by":"publisher","award":["61901392"],"award-info":[{"award-number":["61901392"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"national natural science foundation of china","doi-asserted-by":"publisher","award":["11872069"],"award-info":[{"award-number":["11872069"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Appl Intell"],"published-print":{"date-parts":[[2023,1]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Motivated by human behavior, dividing inpainting tasks into structure reconstruction and texture generation helps to simplify the restoration process and avoid distorted structures and blurry textures. However, most existing methods are ineffective for dealing with large continuous holes. 
In this paper, we devise a parallel adaptive guidance network (PAGN), which repairs structures and enriches textures through parallel branches. Several intermediate-level representations in different branches guide each other via the vertical skip connection and the guidance filter, ensuring that each branch leverages only the desirable features of the other and outputs high-quality contents. Considering that the larger the missing regions are, the less information is available, we propose the joint-contextual attention mechanism (Joint-CAM), which explores the connection between unknown and known patches by measuring their similarity at the same scale and at different scales, to utilize the existing information fully. Since strong feature representation is essential for generating visually realistic and semantically reasonable contents in the missing regions, we further design an attention-based multiscale perceptual res2block (AMPR) in the bottleneck that extracts features of various sizes at granular levels and obtains relatively precise object locations. 
Experiments on the public datasets CelebA-HQ, Places2, and Paris show that our proposed model is superior to state-of-the-art models, especially for filling large holes.<\/jats:p>","DOI":"10.1007\/s10489-022-03387-6","type":"journal-article","created":{"date-parts":[[2022,4,26]],"date-time":"2022-04-26T07:04:47Z","timestamp":1650956687000},"page":"1162-1179","update-policy":"http:\/\/dx.doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Parallel adaptive guidance network for image inpainting"],"prefix":"10.1007","volume":"53","author":[{"given":"Jinyang","family":"Jiang","sequence":"first","affiliation":[]},{"given":"Xiucheng","family":"Dong","sequence":"additional","affiliation":[]},{"given":"Tao","family":"Li","sequence":"additional","affiliation":[]},{"given":"Fan","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Hongjiang","family":"Qian","sequence":"additional","affiliation":[]},{"given":"Guifang","family":"Chen","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,4,26]]},"reference":[{"key":"3387_CR1","doi-asserted-by":"crossref","unstructured":"Shao H, Wang Y, Fu Y, Yin Z (2020) Generative image inpainting via edge structure and color aware fusion. Signal Process Image Commun 87(115929)","DOI":"10.1016\/j.image.2020.115929"},{"issue":"9","key":"3387_CR2","first-page":"1200","volume":"13","author":"A Criminisi","year":"2004","unstructured":"Criminisi A, Perez P, Toyama K (2004) Region filling and object removal by exemplar-based image inpainting. IEEE TIP 13(9):1200\u20131212","journal-title":"IEEE TIP"},{"key":"3387_CR3","doi-asserted-by":"crossref","unstructured":"Wang N, Wang W, Hu W, Fenster A, Li S (2021) Thanka Mural Inpainting Based on Multi-Scale Adaptive Partial Convolution and Stroke-Like Mask. 
IEEE TIP 30:3720\u20133733","DOI":"10.1109\/TIP.2021.3064268"},{"issue":"4","key":"3387_CR4","doi-asserted-by":"publisher","first-page":"82","DOI":"10.1145\/2185520.2185578","volume":"31","author":"S Darabi","year":"2012","unstructured":"Darabi S, Shechtman E, Barnes C, Goldman DB, Sen P (2012) Image melding: Combining inconsistent images using patch-based synthesis. ACM TOG 31(4):82","journal-title":"ACM TOG"},{"key":"3387_CR5","doi-asserted-by":"crossref","unstructured":"Zhang R, Isola P, Efros AA, Shechtman E, Wang O (2018) The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In: Proceedings of the 2018 CVPR, pp 586\u2013595","DOI":"10.1109\/CVPR.2018.00068"},{"issue":"3","key":"3387_CR6","doi-asserted-by":"publisher","first-page":"24","DOI":"10.1145\/1531326.1531330","volume":"28","author":"C Barnes","year":"2009","unstructured":"Barnes C, Shechtman E, Finkelstein A, Goldman DB (2009) PatchMatch: a randomized correspondence algorithm for structural image editing. ACM TOG 28(3):24","journal-title":"ACM TOG"},{"key":"3387_CR7","doi-asserted-by":"crossref","unstructured":"Pathak D, Krahenb\u00fchl P, Donahue J, Darrell T, Efros AA (2016) Context Encoders: Feature Learning by Inpainting. In: Proceedings of the 2016 CVPR, pp 2536\u20132544","DOI":"10.1109\/CVPR.2016.278"},{"issue":"4","key":"3387_CR8","doi-asserted-by":"publisher","first-page":"107","DOI":"10.1145\/3072959.3073659","volume":"36","author":"S Iizuka","year":"2017","unstructured":"Iizuka S, Simo-Serra E, Ishikawa H (2017) Globally and locally consistent image completion. ACM Trans Graph (TOG) 36(4):107","journal-title":"ACM Trans Graph (TOG)"},{"key":"3387_CR9","doi-asserted-by":"crossref","unstructured":"Liu H, Jiang B, Xiao Y (2019) Coherent Semantic Attention for Image Inpainting. 
In: Proceedings of the 2019 ICCV, pp 4169\u20134178","DOI":"10.1109\/ICCV.2019.00427"},{"issue":"6","key":"3387_CR10","doi-asserted-by":"publisher","first-page":"1452","DOI":"10.1109\/TPAMI.2017.2723009","volume":"40","author":"B Zhou","year":"2017","unstructured":"Zhou B, Lapedriza A, Khosla A, Oliva A, Torralba A (2017) Places: a 10 million image database for scene recognition. IEEE TPAMI 40(6):1452\u20131464","journal-title":"IEEE TPAMI"},{"key":"3387_CR11","doi-asserted-by":"crossref","unstructured":"Sagong MC, Shin YG, Kim SW, Park S, Ko SJ (2019) PEPSI: Fast Image Inpainting With Parallel Decoding Network. In: Proceedings of the 2019 CVPR","DOI":"10.1109\/CVPR.2019.01162"},{"key":"3387_CR12","unstructured":"Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016) Rethinking the inception architecture for computer vision. In: Proceedings of the 2016 CVPR, pp 2818\u20132826"},{"key":"3387_CR13","unstructured":"Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D (2014) Generative adversarial nets. In: Proceedings of the 2014 NeurIPS, pp 2672\u20132680"},{"key":"3387_CR14","doi-asserted-by":"crossref","unstructured":"Zhang Q, Shen X, Xu L, Jia J (2014) Rolling Guidance Filter. In: Proceedings of the 2014 ECCV, pp 815\u2013830","DOI":"10.1007\/978-3-319-10578-9_53"},{"key":"3387_CR15","doi-asserted-by":"crossref","unstructured":"Johnson J, Alahi A, Fei-Fei L (2016) Perceptual losses for real-time style transfer and super-resolution. In: Proceedings of the 2016 ECCV, pp 694\u2013711","DOI":"10.1007\/978-3-319-46475-6_43"},{"key":"3387_CR16","doi-asserted-by":"crossref","unstructured":"Liu G, Reda FA, Shih KJ, Wang TC, Tao A, Catanzaro B (2018) Image Inpainting for Irregular Holes Using Partial Convolutions. In: Proceedings of the 2018 ECCV, pp 85\u2013100","DOI":"10.1007\/978-3-030-01252-6_6"},{"key":"3387_CR17","unstructured":"Simonyan K, Zisserman A (2014) Very deep convolutional networks for Large-Scale image recognition. 
In: Proceedings of the 2014 ICLR"},{"key":"3387_CR18","unstructured":"Yu F, Koltun V (2016) Multi-Scale Context aggregation by dilated convolutions. In: Proceedings of the 2016 ICLR"},{"issue":"2","key":"3387_CR19","doi-asserted-by":"publisher","first-page":"652","DOI":"10.1109\/TPAMI.2019.2938758","volume":"43","author":"H Gao","year":"2019","unstructured":"Gao H, Chen M, Zhao K, Zhang Y, Yang H, Torr P (2019) Res2net: A New Multi-Scale Backbone Architecture. IEEE TPAMI 43(2):652\u2013662","journal-title":"IEEE TPAMI"},{"key":"3387_CR20","doi-asserted-by":"crossref","unstructured":"Isola P, Zhu J, Zhou T, Efros AA (2017) Image-to-Image Translation with Conditional Adversarial Networks. In: Proceedings of the 2017 CVPR, pp 5967\u20135976","DOI":"10.1109\/CVPR.2017.632"},{"key":"3387_CR21","doi-asserted-by":"crossref","unstructured":"Chen L-C, Zhu Y, Papandreou G, Schroff F, Adam H (2018) Encoder-decoder with atrous separable convolution for semantic image segmentation. In: Proceedings of the 2018 ECCV, pp 801\u2013818","DOI":"10.1007\/978-3-030-01234-2_49"},{"key":"3387_CR22","doi-asserted-by":"crossref","unstructured":"Liu J, Jung C (2020) Facial image inpainting using attention-based multi-level generative network. Neurocomputing 437:95\u2013106","DOI":"10.1016\/j.neucom.2020.12.118"},{"key":"3387_CR23","unstructured":"Philbin J, Zisserman A. The Paris Dataset, https:\/\/www.robots.ox.ac.uk\/~vgg\/data\/parisbuildings\/"},{"key":"3387_CR24","doi-asserted-by":"crossref","unstructured":"Zeng Y, Fu J, Chao H, Guo B (2019) Learning Pyramid-Context Encoder Network for High-Quality Image Inpainting. In: Proceedings of the 2019 CVPR, pp 1486\u20131494","DOI":"10.1109\/CVPR.2019.00158"},{"key":"3387_CR25","doi-asserted-by":"crossref","unstructured":"Yu J, Lin Z, Yang J, Shen X, Lu X, Huang T (2018) Generative Image Inpainting with Contextual Attention. 
In: Proceedings of the 2018 CVPR, pp 5505\u20135514","DOI":"10.1109\/CVPR.2018.00577"},{"key":"3387_CR26","doi-asserted-by":"crossref","unstructured":"Yu J, Lin Z, Yang J, Shen X, Lu X, Huang T (2019) Free-Form Image Inpainting With Gated Convolution. In: Proceedings of the 2019 ICCV, pp 4470\u20134479","DOI":"10.1109\/ICCV.2019.00457"},{"key":"3387_CR27","doi-asserted-by":"publisher","first-page":"1691","DOI":"10.1007\/s00371-020-01932-3","volume":"37","author":"Y Chen","year":"2021","unstructured":"Chen Y, Liu L, Tao J, Xia R, Zhang Q, Yang K, Xiong J, Chen K (2021) The improved image inpainting algorithm via encoder and similarity constraint. Vis Comput 37:1691\u20131705","journal-title":"Vis Comput"},{"key":"3387_CR28","doi-asserted-by":"crossref","unstructured":"Xiong W, Yu J, Lin Z, Jiang J, Lu X, Barnes C, Luo J (2019) Foreground-Aware Image Inpainting. In: Proceedings of the 2019 CVPR, pp 5833\u20135841","DOI":"10.1109\/CVPR.2019.00599"},{"key":"3387_CR29","unstructured":"Nazeri K, Ng E, Joseph T, Qureshi F, Ebrahimi M (2019) EdgeConnect: Generative Image Inpainting With Adversarial Edge Learning. In: Proceedings of the 2019 ICCVW"},{"key":"3387_CR30","unstructured":"Wang Y, Tao X, Qi X, Shen X, Jia J (2018) Image inpainting via generative multi-column convolutional neural networks. Adv Neural Inf Process Syst:331\u2013340"},{"key":"3387_CR31","doi-asserted-by":"crossref","unstructured":"Guo Z, Chen Z, Yu T, Chen J, Liu S (2019) Progressive Image Inpainting with Full-Resolution Residual Network. In: Proceedings of the 27th ACM International Conference on Multimedia, pp 2496\u20132504","DOI":"10.1145\/3343031.3351022"},{"key":"3387_CR32","doi-asserted-by":"crossref","unstructured":"Li J, Wang N, Zhang L, Du B, Tao D (2020) Recurrent Feature Reasoning for Image Inpainting. 
In: Proceedings of the 2020 CVPR, pp 7757\u20137765","DOI":"10.1109\/CVPR42600.2020.00778"},{"key":"3387_CR33","doi-asserted-by":"crossref","unstructured":"Ren Y, Yu X, Zhang R (2019) StructureFlow: Image Inpainting via Structure-aware Appearance Flow. In: Proceedings of the 2019 ICCV, pp 181\u2013190","DOI":"10.1109\/ICCV.2019.00027"},{"key":"3387_CR34","doi-asserted-by":"publisher","first-page":"259","DOI":"10.1016\/j.neucom.2020.03.090","volume":"405","author":"M Chen","year":"2020","unstructured":"Chen M, Liu Z, Ye L, Wang Y (2020) Attentional coarse-and-fine generative adversarial networks for image inpainting. Neurocomputing 405:259\u2013269","journal-title":"Neurocomputing"},{"key":"3387_CR35","doi-asserted-by":"crossref","unstructured":"Liu H, Jiang B, Song Y, Huang W, Yang C (2020) Rethinking Image Inpainting via a Mutual Encoder-Decoder with Feature Equalizations. In: Proceedings of the 2020 ECCV, pp 725\u2013741","DOI":"10.1007\/978-3-030-58536-5_43"},{"key":"3387_CR36","doi-asserted-by":"crossref","unstructured":"Woo S, Park J, Lee JY (2018) CBAM: Convolutional block attention module. In: Proceedings of the 2018 ECCV, pp 3\u201319","DOI":"10.1007\/978-3-030-01234-2_1"},{"key":"3387_CR37","doi-asserted-by":"crossref","unstructured":"Zheng C, Cham TJ, Cai J (2021) Pluralistic Free-Form Image Completion. Int J Comput Vis","DOI":"10.1007\/s11263-021-01502-7"},{"key":"3387_CR38","doi-asserted-by":"crossref","unstructured":"Li T, Dong X, Lin H (2020) Guided Depth Map Super-Resolution Using Recumbent Y Network. IEEE Access:122695\u2013122708","DOI":"10.1109\/ACCESS.2020.3007667"},{"key":"3387_CR39","doi-asserted-by":"publisher","first-page":"3460","DOI":"10.1007\/s10489-020-01971-2","volume":"51","author":"Y Chen","year":"2021","unstructured":"Chen Y, Zhang H, Liu L, Chen X, Zhang Q, Yang K, Xia R, Xie J (2021) Research on image Inpainting algorithm of improved GAN based on two-discriminations networks. 
Appl Intell 51:3460\u20133474","journal-title":"Appl Intell"},{"key":"3387_CR40","doi-asserted-by":"crossref","unstructured":"Zhu M, He D, Li X, Li C, Li F, Liu X, Ding E, Zhang Z (2021) Image inpainting by end-to-end cascaded refinement with mask awareness. IEEE Trans Image Process:4855\u20134866","DOI":"10.1109\/TIP.2021.3076310"},{"key":"3387_CR41","unstructured":"Kingma DP, Ba J (2015) Adam: A method for stochastic optimization. In: Proceedings of the 2015 ICLR"},{"key":"3387_CR42","doi-asserted-by":"crossref","unstructured":"Liu S, Huang D, Wang Y (2018) Receptive Field Block Net for Accurate and Fast Object Detection. In: Proceedings of the 2018 ECCV, pp 404\u2013419","DOI":"10.1007\/978-3-030-01252-6_24"},{"key":"3387_CR43","doi-asserted-by":"crossref","unstructured":"Mei Y, Fan Y, Zhou Y, Huang L, Huang T, Shi H (2020) Image Super-Resolution With Cross-Scale Non-Local Attention and Exhaustive Self-Exemplars Mining. In: Proceedings of the 2020 CVPR, pp 5689\u20135698","DOI":"10.1109\/CVPR42600.2020.00573"},{"key":"3387_CR44","doi-asserted-by":"publisher","first-page":"11217","DOI":"10.1007\/s00521-020-04702-3","volume":"32","author":"Y Ding","year":"2020","unstructured":"Ding Y, Lin L, Wang L, Zhang M, Li D (2020) Digging into the multi-scale structure for a more refined depth map and 3D reconstruction. Neural Comput Appl 32:11217\u201311228","journal-title":"Neural Comput Appl"},{"key":"3387_CR45","doi-asserted-by":"publisher","first-page":"1437","DOI":"10.1007\/s10489-019-01567-5","volume":"50","author":"C Wang","year":"2020","unstructured":"Wang C, Wu Y, Cai Y, Yao G, Wang ZH (2020) Single image deraining via deep pyramid network with spatial contextual information aggregation. Appl Intell 50:1437\u20131447","journal-title":"Appl Intell"},{"key":"3387_CR46","unstructured":"Karras T, Aila T, Laine S, Lehtinen J (2017) Progressive growing of GANs for improved quality, stability, and variation. 
arXiv preprint arXiv:1710.10196"}],"container-title":["Applied Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-022-03387-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10489-022-03387-6\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-022-03387-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,1,3]],"date-time":"2023-01-03T04:59:16Z","timestamp":1672721956000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10489-022-03387-6"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,4,26]]},"references-count":46,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2023,1]]}},"alternative-id":["3387"],"URL":"https:\/\/doi.org\/10.1007\/s10489-022-03387-6","relation":{},"ISSN":["0924-669X","1573-7497"],"issn-type":[{"value":"0924-669X","type":"print"},{"value":"1573-7497","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,4,26]]},"assertion":[{"value":"13 February 2022","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"26 April 2022","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}