{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,4]],"date-time":"2026-03-04T22:40:00Z","timestamp":1772664000659,"version":"3.50.1"},"reference-count":31,"publisher":"MDPI AG","issue":"10","license":[{"start":{"date-parts":[[2022,10,14]],"date-time":"2022-10-14T00:00:00Z","timestamp":1665705600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62171327"],"award-info":[{"award-number":["62171327"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["B210610"],"award-info":[{"award-number":["B210610"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Hubei Nuclear Power Operation Engineering Technology Research Center","award":["62171327"],"award-info":[{"award-number":["62171327"]}]},{"name":"Hubei Nuclear Power Operation Engineering Technology Research Center","award":["B210610"],"award-info":[{"award-number":["B210610"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Symmetry"],"abstract":"<jats:p>Recently, learning-based image completion methods have made encouraging progress on square or irregular masks. Generative adversarial networks (GANs) can produce visually realistic and semantically correct results. However, much texture and structure information is lost during completion. If the missing region is too large to provide useful information, the result suffers from ambiguity, residual shadows, and object confusion. To complete large-mask images, we present a novel conditional-GAN model called coarse-to-fine conditional GAN (CF CGAN). 
We use a symmetric coarse-to-fine generator and a new perceptual loss based on VGG-16. For large-mask image completion, our method produces visually realistic and semantically correct results, and the model also generalizes well. We evaluate our model on the CelebA dataset, using FID, LPIPS, and SSIM as metrics. Experiments demonstrate superior performance in terms of both quality and realism in free-form image completion.<\/jats:p>","DOI":"10.3390\/sym14102148","type":"journal-article","created":{"date-parts":[[2022,10,17]],"date-time":"2022-10-17T05:08:02Z","timestamp":1665983282000},"page":"2148","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":8,"title":["Large Mask Image Completion with Conditional GAN"],"prefix":"10.3390","volume":"14","author":[{"given":"Changcheng","family":"Shao","sequence":"first","affiliation":[{"name":"College of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan 430000, China"}]},{"given":"Xiaolin","family":"Li","sequence":"additional","affiliation":[{"name":"College of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan 430000, China"}]},{"given":"Fang","family":"Li","sequence":"additional","affiliation":[{"name":"College of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan 430000, China"}]},{"given":"Yifan","family":"Zhou","sequence":"additional","affiliation":[{"name":"College of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan 430000, China"}]}],"member":"1968","published-online":{"date-parts":[[2022,10,14]]},"reference":[{"key":"ref_1","first-page":"3638","article-title":"Adaptive GNN for image analysis and editing","volume":"32","author":"Liang","year":"2019","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Absetan, A., and Fathi, A. (2022). 
Integration of Deep Learned and Handcrafted Features for Image Retargeting Quality Assessment. Cybern. Syst., 1\u201324.","DOI":"10.1080\/01969722.2022.2071408"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"43","DOI":"10.1109\/TBC.2021.3113280","article-title":"Stereoars: Quality evaluation for stereoscopic image retargeting with binocular inconsistency detection","volume":"68","author":"Jiang","year":"2021","journal-title":"IEEE Trans. Broadcast."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"4-es","DOI":"10.1145\/1276377.1276382","article-title":"Scene completion using millions of photographs","volume":"26","author":"Hays","year":"2007","journal-title":"ACM Trans. Graph. (ToG)"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1200","DOI":"10.1109\/TIP.2004.833105","article-title":"Region filling and object removal by exemplar-based image inpainting","volume":"13","author":"Criminisi","year":"2004","journal-title":"IEEE Trans. Image Process."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Liao, L., Hu, R., Xiao, J., and Wang, Z. (2018, January 15\u201320). Edge-aware context encoder for image inpainting. Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada.","DOI":"10.1109\/ICASSP.2018.8462549"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., and Efros, A.A. (2016, January 27\u201330). Context encoders: Feature learning by inpainting. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.278"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Bertalmio, M., Sapiro, G., Caselles, V., and Ballester, C. (2000, January 23\u201328). Image inpainting. 
Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, LA, USA.","DOI":"10.1145\/344779.344972"},{"key":"ref_9","unstructured":"Zhao, S., Cui, J., Sheng, Y., Dong, Y., Liang, X., Chang, E.I., and Xu, Y. (2021). Large scale image completion via co-modulated generative adversarial networks. arXiv."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J.Y., Zhou, T., and Efros, A.A. (2017, January 21\u201326). Image-to-image translation with conditional adversarial networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.632"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Yu, J., Lin, Z., Yang, J., Shen, X., Lu, X., and Huang, T.S. (2018, January 18\u201322). Generative image inpainting with contextual attention. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00577"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Wang, T.C., Liu, M.Y., Zhu, J.Y., Tao, A., Kautz, J., and Catanzaro, B. (2018, January 18\u201322). High-resolution image synthesis and semantic manipulation with conditional gans. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00917"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Suvorov, R., Logacheva, E., Mashikhin, A., Remizova, A., Ashukha, A., Silvestrov, A., Kong, N., Goka, H., Park, K., and Lempitsky, V. (2022, January 4\u20138). Resolution-robust large mask inpainting with fourier convolutions. Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA.","DOI":"10.1109\/WACV51458.2022.00323"},{"key":"ref_14","unstructured":"Sun, J., Bhattarai, B., Chen, Z., and Kim, T.K. (2021). 
SeCGAN: Parallel Conditional Generative Adversarial Networks for Face Editing via Semantic Consistency. arXiv."},{"key":"ref_15","unstructured":"Simonyan, K., and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Walia, S., Kumar, K., Agarwal, S., and Kim, H. (2022). Using XAI for Deep Learning-Based Image Manipulation Detection with Shapley Additive Explanation. Symmetry, 14.","DOI":"10.3390\/sym14081611"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Umair, M., Hashmani, M.A., Hussain Rizvi, S.S., Taib, H., Abdullah, M.N., and Memon, M.M. (2022). A Novel Deep Learning Model for Sea State Classification Using Visual-Range Sea Images. Symmetry, 14.","DOI":"10.3390\/sym14071487"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3072959.3073659","article-title":"Globally and locally consistent image completion","volume":"36","author":"Iizuka","year":"2017","journal-title":"ACM Trans. Graph. (ToG)"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"2786","DOI":"10.1007\/s11263-021-01502-7","article-title":"Pluralistic free-form image completion","volume":"129","author":"Zheng","year":"2021","journal-title":"Int. J. Comput. Vis."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Yang, C., Lu, X., Lin, Z., Shechtman, E., Wang, O., and Li, H. (2017, January 21\u201326). High-resolution image inpainting using multi-scale neural patch synthesis. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.434"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Dolhansky, B., and Ferrer, C.C. (2018, January 18\u201322). Eye in-painting with exemplar generative adversarial networks. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00824"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Liao, H., Funka-Lea, G., Zheng, Y., Luo, J., and Kevin Zhou, S. (2018, January 2\u20136). Face completion with semantic knowledge and collaborative adversarial learning. Proceedings of the Asian Conference on Computer Vision, Perth, Australia.","DOI":"10.1007\/978-3-030-20887-5_24"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Yeh, R.A., Chen, C., Yian Lim, T., Schwing, A.G., Hasegawa-Johnson, M., and Do, M.N. (2017, January 21\u201326). Semantic image inpainting with deep generative models. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.728"},{"key":"ref_24","unstructured":"Mescheder, L., Geiger, A., and Nowozin, S. (2018, January 10\u201315). Which training methods for GANs do actually converge?. Proceedings of the International Conference on Machine Learning, Stockholm, Sweden."},{"key":"ref_25","first-page":"4479","article-title":"Fast fourier convolution","volume":"33","author":"Chi","year":"2020","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"532","DOI":"10.1109\/TCOM.1983.1095851","article-title":"The Laplacian pyramid as a compact image code","volume":"31","author":"Burt","year":"1983","journal-title":"IEEE Trans. Commun."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Zhang, R., Isola, P., Efros, A.A., Shechtman, E., and Wang, O. (2018, January 18\u201322). The unreasonable effectiveness of deep features as a perceptual metric. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00068"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Gatys, L.A., Ecker, A.S., and Bethge, M. (2015). 
A neural algorithm of artistic style. arXiv.","DOI":"10.1167\/16.12.326"},{"key":"ref_29","unstructured":"Karras, T., Aila, T., Laine, S., and Lehtinen, J. (2017). Progressive growing of gans for improved quality, stability, and variation. arXiv."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Hore, A., and Ziou, D. (2010, January 23\u201326). Image quality metrics: PSNR vs. SSIM. Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey.","DOI":"10.1109\/ICPR.2010.579"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Schroff, F., Kalenichenko, D., and Philbin, J. (2015, January 7\u201312). Facenet: A unified embedding for face recognition and clustering. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298682"}],"container-title":["Symmetry"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2073-8994\/14\/10\/2148\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:53:55Z","timestamp":1760144035000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2073-8994\/14\/10\/2148"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,10,14]]},"references-count":31,"journal-issue":{"issue":"10","published-online":{"date-parts":[[2022,10]]}},"alternative-id":["sym14102148"],"URL":"https:\/\/doi.org\/10.3390\/sym14102148","relation":{},"ISSN":["2073-8994"],"issn-type":[{"value":"2073-8994","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,10,14]]}}}