{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T03:04:38Z","timestamp":1760151878871,"version":"build-2065373602"},"reference-count":26,"publisher":"MDPI AG","issue":"20","license":[{"start":{"date-parts":[[2022,10,20]],"date-time":"2022-10-20T00:00:00Z","timestamp":1666224000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>An automatic colorization algorithm can convert a grayscale image to a colorful image using regression loss functions or classification loss functions. However, the regression loss function leads to brown results, while the classification loss function leads to the problem of color overflow and the computation of the color categories and balance weights of the ground truth required for the weighted classification loss is too large. In this paper, we propose a new method to compute color categories and balance weights of color images. Furthermore, we propose a U-Net-based colorization network. First, we propose a category conversion module and a category balance module to obtain the color categories and to balance weights, which dramatically reduces the training time. Second, we construct a classification subnetwork to constrain the colorization network with category loss, which improves the colorization accuracy and saturation. Finally, we introduce an asymmetric feature fusion (AFF) module to fuse the multiscale features, which effectively prevents color overflow and improves the colorization effect. 
The experiments show that our colorization network has peak signal-to-noise ratio (PSNR) and structure similarity index measure (SSIM) metrics of 25.8803 and 0.9368, respectively, for the ImageNet dataset. As compared with existing algorithms, our algorithm produces colorful images with vivid colors, no significant color overflow, and higher saturation.<\/jats:p>","DOI":"10.3390\/s22208010","type":"journal-article","created":{"date-parts":[[2022,10,21]],"date-time":"2022-10-21T00:34:30Z","timestamp":1666312470000},"page":"8010","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Colorful Image Colorization with Classification and Asymmetric Feature Fusion"],"prefix":"10.3390","volume":"22","author":[{"given":"Zhiyuan","family":"Wang","sequence":"first","affiliation":[{"name":"Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China"},{"name":"University of Chinese Academy of Sciences, Beijing 100049, China"}]},{"given":"Yi","family":"Yu","sequence":"additional","affiliation":[{"name":"Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China"}]},{"given":"Daqun","family":"Li","sequence":"additional","affiliation":[{"name":"Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China"}]},{"given":"Yuanyuan","family":"Wan","sequence":"additional","affiliation":[{"name":"Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8927-6406","authenticated-orcid":false,"given":"Mingyang","family":"Li","sequence":"additional","affiliation":[{"name":"Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China"},{"name":"University of Chinese Academy of Sciences, Beijing 100049, 
China"}]}],"member":"1968","published-online":{"date-parts":[[2022,10,20]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"689","DOI":"10.1145\/1015706.1015780","article-title":"Colorization using optimization","volume":"23","author":"Levin","year":"2004","journal-title":"ACM Trans. Graph."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"1120","DOI":"10.1109\/TIP.2005.864231","article-title":"Fast image and video colorization using chrominance blending","volume":"15","author":"Yatziv","year":"2006","journal-title":"IEEE Trans. Image Process."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"1214","DOI":"10.1145\/1141911.1142017","article-title":"Manga Colorization","volume":"25","author":"Qu","year":"2006","journal-title":"ACM Trans. Graph."},{"key":"ref_4","unstructured":"Luan, Q., Wen, F., Cohen-Or, D., Liang, L., Xu, Y.-Q., and Shum, H.-Y. (2007, January 25). Natural Image Colorization. Proceedings of the 18th Eurographics Conference on Rendering Techniques, Grenoble, France."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"277","DOI":"10.1145\/566654.566576","article-title":"Transferring color to greyscale images","volume":"21","author":"Welsh","year":"2002","journal-title":"ACM Trans. Graph."},{"key":"ref_6","unstructured":"Bala, K., and Dutre, P. (July, January 29). Colorization by Example. Proceedings of the Eurographics Symposium on Rendering (2005), Konstanz, Germany."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Liu, X., Wan, L., Qu, Y., Wong, T.-T., Lin, S., Leung, C.-S., and Heng, P.-A. (2008, January 1). Intrinsic Colorization. Proceedings of the ACM SIGGRAPH Asia 2008, New York, NY, USA.","DOI":"10.1145\/1457515.1409105"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"1119","DOI":"10.1007\/s11390-012-1290-4","article-title":"Affective Image Colorization","volume":"27","author":"Wang","year":"2012","journal-title":"J. Comput. Sci. 
Technol."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Cheng, Z., Yang, Q., and Sheng, B. (2015, January 7\u201313). Deep Colorization. Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile.","DOI":"10.1109\/ICCV.2015.55"},{"key":"ref_10","unstructured":"Leibe, B., Matas, J., Sebe, N., and Welling, M. (2016). Learning Representations for Automatic Colorization. Lecture Notes in Computer Science, Springer."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/2897824.2925974","article-title":"Let there be color!: Joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification","volume":"35","author":"Iizuka","year":"2016","journal-title":"ACM Trans. Graph."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"85","DOI":"10.1007\/978-3-319-94544-6_9","article-title":"Image Colorization Using Generative Adversarial Networks","volume":"Volume 10945","author":"Nazeri","year":"2018","journal-title":"International Conference on Articulated Motion and Deformable Objects"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A.A. (2017, January 21\u201326). Image-to-Image Translation with Conditional Adversarial Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.632"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Cao, Y., Zhou, Z., Zhang, W., and Yu, Y. (2017). Unsupervised Diverse Colorization via Generative Adversarial Networks. Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Springer.","DOI":"10.1007\/978-3-319-71249-9_10"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Vitoria, P., Raad, L., and Ballester, C. (2020, January 1\u20135). ChromaGAN: Adversarial Picture Colorization with Semantic Class Distribution. 
Proceedings of the 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), Snowmass Village, CO, USA.","DOI":"10.1109\/WACV45572.2020.9093389"},{"key":"ref_16","first-page":"1","article-title":"Real-time user-guided image colorization with learned deep priors","volume":"36","author":"Zhang","year":"2017","journal-title":"ACM Trans. Graph."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"818","DOI":"10.1007\/s11263-019-01271-4","article-title":"Pixelated Semantic Colorization","volume":"128","author":"Zhao","year":"2019","journal-title":"Int. J. Comput. Vis."},{"key":"ref_18","unstructured":"Antic, J. (2019, October 16). Jantic\/Deoldify: A Deep Learning Based Project for Colorizing and Restoring Old Images (and Video!). Available online: https:\/\/github.com\/jantic."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Su, J.-W., Chu, H.-K., and Huang, J.-B. (2020, January 13\u201319). Instance-Aware Image Colorization. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00799"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Wu, Y., Wang, X., Li, Y., Zhang, H., Zhao, X., and Shan, Y. (2021, January 10\u201317). Towards Vivid and Diverse Image Colorization with Generative Color Prior. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.01411"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Jin, X., Li, Z., Liu, K., Zou, D., Li, X., Zhu, X., Zhou, Z., Sun, Q., and Liu, Q. (2021, January 20\u201324). Focusing on Persons: Colorizing Old Images Learning from Modern Historical Movies. Proceedings of the 29th ACM International Conference on Multimedia, Virtual Event.","DOI":"10.1145\/3474085.3481544"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Zhang, R., Isola, P., and Efros, A.A. (2016). Colorful Image Colorization. 
European Conference on Computer Vision, Springer.","DOI":"10.1007\/978-3-319-46487-9_40"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Navab, N., Hornegger, J., Wells, W.M., and Frangi, A.F. (2015). U-Net: Convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention 2015, Springer International Publishing.","DOI":"10.1007\/978-3-319-24553-9"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., and Ko, S.-J. (2021, January 10\u201317). Rethinking Coarse-to-Fine Approach in Single Image Deblurring. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00460"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"239","DOI":"10.1007\/978-3-030-01228-1_15","article-title":"Parallel Feature Pyramid Network for Object Detection","volume":"Volume 11209","author":"Ferrari","year":"2018","journal-title":"Computer Vision\u2013ECCV 
2018"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/20\/8010\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:58:05Z","timestamp":1760144285000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/20\/8010"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,10,20]]},"references-count":26,"journal-issue":{"issue":"20","published-online":{"date-parts":[[2022,10]]}},"alternative-id":["s22208010"],"URL":"https:\/\/doi.org\/10.3390\/s22208010","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2022,10,20]]}}}