{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,28]],"date-time":"2025-11-28T12:31:00Z","timestamp":1764333060745,"version":"build-2065373602"},"reference-count":50,"publisher":"MDPI AG","issue":"8","license":[{"start":{"date-parts":[[2021,8,10]],"date-time":"2021-08-10T00:00:00Z","timestamp":1628553600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61907025, 61807020, 61702278"],"award-info":[{"award-number":["61907025, 61807020, 61702278"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Natural Science Foundation of Jiangsu Higher Education Institutions of China","award":["19KJB520048"],"award-info":[{"award-number":["19KJB520048"]}]},{"name":"NUPTSF","award":["NY219069"],"award-info":[{"award-number":["NY219069"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Algorithms"],"abstract":"<jats:p>Recently, deep learning has enabled a huge leap forward in image inpainting. However, due to memory and computational limitations, most existing methods can handle only low-resolution inputs, typically less than 1K. With the improvement of Internet transmission capacity and mobile device cameras, the resolution of the image and video sources available to users via the cloud or locally is increasing. For high-resolution images, common inpainting methods simply upsample the inpainted result of the downscaled image, yielding a blurry result. There is thus an urgent need to reconstruct the missing high-frequency information in high-resolution images and generate sharp texture details. 
Hence, we propose a general deep learning framework for high-resolution image inpainting, which first hallucinates a semantically continuous, blurred result using low-resolution inpainting, keeping the computational overhead low. Then, sharp high-frequency details at the original resolution are reconstructed using super-resolution refinement. Experimentally, our method achieves impressive inpainting quality on 2K and 4K images, surpassing the state-of-the-art high-resolution inpainting technique. We expect this framework to be widely adopted for high-resolution image editing tasks on personal computers and mobile devices in the future.<\/jats:p>","DOI":"10.3390\/a14080236","type":"journal-article","created":{"date-parts":[[2021,8,10]],"date-time":"2021-08-10T02:15:00Z","timestamp":1628561700000},"page":"236","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["SR-Inpaint: A General Deep Learning Framework for High Resolution Image Inpainting"],"prefix":"10.3390","volume":"14","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-9245-7611","authenticated-orcid":false,"given":"Haoran","family":"Xu","sequence":"first","affiliation":[{"name":"School of Electronic and Optical Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210049, China"},{"name":"School of Microelectronics, Nanjing University of Posts and Telecommunications, Nanjing 210049, China"}]},{"given":"Xinya","family":"Li","sequence":"additional","affiliation":[{"name":"School of Educational Science and Technology, Nanjing University of Posts and Telecommunications, Nanjing 210049, China"}]},{"given":"Kaiyi","family":"Zhang","sequence":"additional","affiliation":[{"name":"School of Overseas Education, Nanjing University of Posts and Telecommunications, Nanjing 210049, China"}]},{"given":"Yanbai","family":"He","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Cognitive Neuroscience and 
Learning, Beijing Normal University, Beijing 100875, China"}]},{"given":"Haoran","family":"Fan","sequence":"additional","affiliation":[{"name":"School of Electronic and Optical Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210049, China"},{"name":"School of Microelectronics, Nanjing University of Posts and Telecommunications, Nanjing 210049, China"}]},{"given":"Sijiang","family":"Liu","sequence":"additional","affiliation":[{"name":"School of Educational Science and Technology, Nanjing University of Posts and Telecommunications, Nanjing 210049, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3887-5438","authenticated-orcid":false,"given":"Chuanyan","family":"Hao","sequence":"additional","affiliation":[{"name":"School of Educational Science and Technology, Nanjing University of Posts and Telecommunications, Nanjing 210049, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6451-7565","authenticated-orcid":false,"given":"Bo","family":"Jiang","sequence":"additional","affiliation":[{"name":"School of Educational Science and Technology, Nanjing University of Posts and Telecommunications, Nanjing 210049, China"}]}],"member":"1968","published-online":{"date-parts":[[2021,8,10]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1200","DOI":"10.1109\/83.935036","article-title":"Filling-in by joint interpolation of vector fields and gray levels","volume":"10","author":"Ballester","year":"2001","journal-title":"IEEE Trans. Image Process."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"353","DOI":"10.1017\/S0956792502004904","article-title":"Digital inpainting based on the Mumford\u2013Shah\u2013Euler image model","volume":"13","author":"Esedoglu","year":"2002","journal-title":"Eur. J. Appl. 
Math."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"1273","DOI":"10.1109\/TCSVT.2007.903663","article-title":"Image compression with edge-based inpainting","volume":"17","author":"Liu","year":"2007","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"ref_4","first-page":"2229","article-title":"Region filling and object removal by exemplar-based image inpainting","volume":"3","author":"Waykule","year":"2012","journal-title":"Int. J. Sci. Eng. Res."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"He, K., and Sun, J. (2012, January 7\u201313). Statistics of patch offsets for image completion. Proceedings of the European Conference on Computer Vision, Florence, Italy.","DOI":"10.1007\/978-3-642-33709-3_2"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Drori, I., Cohen-Or, D., and Yeshurun, H. (2003). Fragment-based image completion. ACM SIGGRAPH 2003 Papers, ACM.","DOI":"10.1145\/1201775.882267"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Wilczkowiak, M., Brostow, G.J., Tordoff, B., and Cipolla, R. (2005, January 5\u20138). Hole filling through photomontage. Proceedings of the BMVC 2005-Proceedings of the British Machine Vision Conference, Oxford, UK.","DOI":"10.5244\/C.19.52"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"1153","DOI":"10.1109\/TIP.2010.2042098","article-title":"Image inpainting by patch propagation using patch sparsity","volume":"19","author":"Xu","year":"2010","journal-title":"IEEE Trans. Image Process."},{"key":"ref_9","unstructured":"Nazeri, K., Ng, E., Joseph, T., Qureshi, F.Z., and Ebrahimi, M. (2019). Edgeconnect: Generative image inpainting with adversarial edge learning. arXiv."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Yeh, R.A., Chen, C., Lim, T.Y., Schwing, A.G., Hasegawa-Johnson, M., and Do, M.N. (2017). Semantic Image Inpainting with Deep Generative Models. 
arXiv.","DOI":"10.1109\/CVPR.2017.728"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Yu, J., Lin, Z., Yang, J., Shen, X., Lu, X., and Huang, T.S. (2018). Generative Image Inpainting with Contextual Attention. arXiv.","DOI":"10.1109\/CVPR.2018.00577"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3072959.3073659","article-title":"Globally and locally consistent image completion","volume":"36","author":"Iizuka","year":"2017","journal-title":"ACM Trans. Graph."},{"key":"ref_13","first-page":"1747","article-title":"Pixel Recurrent Neural Networks","volume":"Volume 48","author":"Balcan","year":"2016","journal-title":"Machine Learning Research, Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 20\u201322 June 2016"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Liao, L., Hu, R., Xiao, J., and Wang, Z. (2018, January 15\u201320). Edge-Aware Context Encoder for Image Inpainting. Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada.","DOI":"10.1109\/ICASSP.2018.8462549"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., and Efros, A.A. (2016, January 27\u201330). Context Encoders: Feature Learning by Inpainting. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.278"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Xiong, W., Yu, J., Lin, Z., Yang, J., Lu, X., Barnes, C., and Luo, J. (2019, January 15\u201320). Foreground-Aware Image Inpainting. Proceedings of the 2019 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00599"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Yi, Z., Tang, Q., Azizi, S., Jang, D., and Xu, Z. (2020, January 14\u201319). 
Contextual Residual Aggregation for Ultra High-Resolution Image Inpainting. Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00753"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Yang, C., Lu, X., Lin, Z., Shechtman, E., Wang, O., and Li, H. (2017, January 21\u201326). High-Resolution Image Inpainting Using Multi-scale Neural Patch Synthesis. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.434"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Ikehata, S., Cho, J.H., and Aizawa, K. (2013, January 15\u201318). Depth map inpainting and super-resolution based on internal statistics of geometry and appearance. Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, Australia.","DOI":"10.1109\/ICIP.2013.6738194"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Kim, S.Y., Aberman, K., Kanazawa, N., Garg, R., Wadhwa, N., Chang, H., Karnad, N., Kim, M., and Liba, O. (2021). Zoom-to-Inpaint: Image Inpainting with High-Frequency Details. arXiv.","DOI":"10.1109\/CVPRW56347.2022.00063"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Efros, A.A., and Freeman, W.T. (2001, January 12\u201317). Image Quilting for Texture Synthesis and Transfer. Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA.","DOI":"10.1145\/383259.383296"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Efros, A., and Leung, T. (1999, January 20\u201327). Texture synthesis by non-parametric sampling. 
Proceedings of the Seventh IEEE International Conference on Computer Vision, Corfu, Greece.","DOI":"10.1109\/ICCV.1999.790383"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"63514","DOI":"10.1109\/ACCESS.2020.2982224","article-title":"A State-of-the-Art Review on Image Synthesis With Generative Adversarial Networks","volume":"8","author":"Wang","year":"2020","journal-title":"IEEE Access"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Pan, X., Zhan, X., Dai, B., Lin, D., Loy, C.C., and Luo, P. (2020). Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation. arXiv.","DOI":"10.1007\/978-3-030-58536-5_16"},{"key":"ref_25","unstructured":"Brock, A., Donahue, J., and Simonyan, K. (2019). Large Scale GAN Training for High Fidelity Natural Image Synthesis. arXiv."},{"key":"ref_26","unstructured":"Karras, T., Aila, T., Laine, S., and Lehtinen, J. (2018). Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv."},{"key":"ref_27","unstructured":"Radford, A., Metz, L., and Chintala, S. (2015). Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Zhang, H., Xu, T., Li, H., Zhang, S., Wang, X., Huang, X., and Metaxas, D. (2017, January 22\u201329). StackGAN: Text to Photo-Realistic Image Synthesis with Stacked Generative Adversarial Networks. Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.629"},{"key":"ref_29","unstructured":"Huang, T.S. (2021, July 15). Advances in Computer Vision and Image Processing: A Research Annual: Image Enhancement and Restoration, v. 2. Available online: http:\/\/a.xueshu.baidu.com\/usercenter\/paper\/show?paperid=ff63d1c895dbe3a66d889dbc93368fad."},{"key":"ref_30","unstructured":"Stark, H., and Yang, Y. (1998). 
Vector Space Projections: A Numerical Approach to Signal and Image Processing, Neural Nets, and Optics, Wiley-Interscience."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"295","DOI":"10.1109\/TPAMI.2015.2439281","article-title":"Image Super-Resolution Using Deep Convolutional Networks","volume":"38","author":"Dong","year":"2016","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2015, January 7\u201313). Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile.","DOI":"10.1109\/ICCV.2015.123"},{"key":"ref_33","first-page":"1162","article-title":"Light Field Super-Resolution Using a Low-Rank Prior and Deep Convolutional Neural Networks","volume":"42","author":"Farrugia","year":"2020","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Kim, J., Lee, J.K., and Lee, K.M. (2016). Accurate Image Super-Resolution Using Very Deep Convolutional Networks. arXiv.","DOI":"10.1109\/CVPR.2016.182"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Dong, C., Loy, C.C., and Tang, X. (2016). Accelerating the Super-Resolution Convolutional Neural Network. arXiv.","DOI":"10.1007\/978-3-319-46475-6_25"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Talab, M.A., Awang, S., and Najim, S.A.d.M. (2019, January 29). Super-Low Resolution Face Recognition using Integrated Efficient Sub-Pixel Convolutional Neural Network (ESPCN) and Convolutional Neural Network (CNN). Proceedings of the 2019 IEEE International Conference on Automatic Control and Intelligent Systems (I2CACIS), Shah Alam, Malaysia.","DOI":"10.1109\/I2CACIS.2019.8825083"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Lim, B., Son, S., Kim, H., Nah, S., and Lee, K.M. 
(2017, January 21\u201326). Enhanced Deep Residual Networks for Single Image Super-Resolution. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA.","DOI":"10.1109\/CVPRW.2017.151"},{"key":"ref_38","unstructured":"Pl\u00f6tz, T., and Roth, S. (2018). Neural Nearest Neighbors Networks. arXiv."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Tian, Y., Kong, Y., Zhong, B., and Fu, Y. (2018, January 18\u201323). Residual Dense Network for Image Super-Resolution. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00262"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Zhang, K., Zuo, W., and Zhang, L. (2019). Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels. arXiv.","DOI":"10.1109\/CVPR.2019.00177"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Johnson, J., Alahi, A., and Fei-Fei, L. (2016). Perceptual Losses for Real-Time Style Transfer and Super-Resolution. arXiv.","DOI":"10.1007\/978-3-319-46475-6_43"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep Residual Learning for Image Recognition. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J.Y., Zhou, T., and Efros, A.A. (2018). Image-to-Image Translation with Conditional Adversarial Networks. arXiv.","DOI":"10.1109\/CVPR.2017.632"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Zhu, J.Y., Park, T., Isola, P., and Efros, A.A. (2017, January 22\u201329). Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. 
Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.244"},{"key":"ref_45","unstructured":"Kingma, D.P., and Ba, J. (2017). Adam: A Method for Stochastic Optimization. arXiv."},{"key":"ref_46","unstructured":"Timofte, R., Gu, S., Wu, J., Van Gool, L., Zhang, L., Yang, M.H., Haris, M., Shakhnarovich, G., Ukita, N., and Hu, S. (2018, January 18\u201322). NTIRE 2018 Challenge on Single Image Super-Resolution: Methods and Results. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA."},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"8","DOI":"10.4236\/jcc.2019.73002","article-title":"Image quality assessment through FSIM, SSIM, MSE and PSNR\u2014A comparative study","volume":"7","author":"Sara","year":"2019","journal-title":"J. Comput. Commun."},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"1398","DOI":"10.1109\/ACSSC.2003.1292216","article-title":"Multiscale structural similarity for image quality assessment","volume":"Volume 2","author":"Wang","year":"2003","journal-title":"Proceedings of the Thrity-Seventh Asilomar Conference on Signals, Systems & Computers"},{"key":"ref_49","unstructured":"Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. (2017). Gans trained by a two time-scale update rule converge to a local nash equilibrium. arXiv."},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. (2016, January 27\u201330). Rethinking the Inception Architecture for Computer Vision. 
Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.308"}],"container-title":["Algorithms"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1999-4893\/14\/8\/236\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T06:43:14Z","timestamp":1760164994000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1999-4893\/14\/8\/236"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,8,10]]},"references-count":50,"journal-issue":{"issue":"8","published-online":{"date-parts":[[2021,8]]}},"alternative-id":["a14080236"],"URL":"https:\/\/doi.org\/10.3390\/a14080236","relation":{},"ISSN":["1999-4893"],"issn-type":[{"type":"electronic","value":"1999-4893"}],"subject":[],"published":{"date-parts":[[2021,8,10]]}}}