{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:28:56Z","timestamp":1760146136121,"version":"build-2065373602"},"reference-count":75,"publisher":"MDPI AG","issue":"20","license":[{"start":{"date-parts":[[2024,10,10]],"date-time":"2024-10-10T00:00:00Z","timestamp":1728518400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62103432","2022M721841","2021108"],"award-info":[{"award-number":["62103432","2022M721841","2021108"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100002858","name":"China Postdoctoral Science Foundation","doi-asserted-by":"publisher","award":["62103432","2022M721841","2021108"],"award-info":[{"award-number":["62103432","2022M721841","2021108"]}],"id":[{"id":"10.13039\/501100002858","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Young Talent Fund of the University Association for Science and Technology in Shannxi, China","award":["62103432","2022M721841","2021108"],"award-info":[{"award-number":["62103432","2022M721841","2021108"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Thermal infrared cameras can image stably in complex scenes such as night, rain, snow, and dense fog. Still, humans are more sensitive to visual colors, so there is an urgent need to convert infrared images into color images in areas such as assisted driving. This paper studies a colorization method for infrared images based on a generative adversarial model. 
The proposed dual-branch feature extraction network keeps the content and structure of the generated visible-light image stable, and the proposed discrimination strategy, which combines hybrid spatial- and frequency-domain constraints, effectively alleviates undersaturated coloring and the loss of texture detail in the edge regions of the generated image. Comparative experiments on public paired infrared-visible datasets show that the proposed algorithm achieves the best performance in maintaining the content and structural consistency of the generated images, restoring the image color distribution, and recovering image texture details.<\/jats:p>","DOI":"10.3390\/rs16203766","type":"journal-article","created":{"date-parts":[[2024,10,10]],"date-time":"2024-10-10T11:34:36Z","timestamp":1728560076000},"page":"3766","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["DBSF-Net: Infrared Image Colorization Based on the Generative Adversarial Model with Dual-Branch Feature Extraction and Spatial-Frequency-Domain Discrimination"],"prefix":"10.3390","volume":"16","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7560-9951","authenticated-orcid":false,"given":"Shaopeng","family":"Li","sequence":"first","affiliation":[{"name":"PLA Rocket Force University of Engineering, Xi\u2019an 710025, China"},{"name":"Department of Automation, Tsinghua University, Beijing 100084, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8545-6215","authenticated-orcid":false,"given":"Decao","family":"Ma","sequence":"additional","affiliation":[{"name":"PLA Rocket Force University of Engineering, Xi\u2019an 710025, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2040-2640","authenticated-orcid":false,"given":"Yao","family":"Ding","sequence":"additional","affiliation":[{"name":"PLA Rocket Force University of Engineering, Xi\u2019an 710025, 
China"}]},{"given":"Yong","family":"Xian","sequence":"additional","affiliation":[{"name":"PLA Rocket Force University of Engineering, Xi\u2019an 710025, China"}]},{"given":"Tao","family":"Zhang","sequence":"additional","affiliation":[{"name":"Department of Automation, Tsinghua University, Beijing 100084, China"}]}],"member":"1968","published-online":{"date-parts":[[2024,10,10]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Zhao, G., Hu, Z., Feng, S., Wang, Z., and Wu, H. (2024). GLFuse: A Global and Local Four-Branch Feature Extraction Network for Infrared and Visible Image Fusion. Remote Sens., 16.","DOI":"10.3390\/rs16173246"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Gao, X., and Liu, S. (2024). BCMFIFuse: A Bilateral Cross-Modal Feature Interaction-Based Network for Infrared and Visible Image Fusion. Remote Sens., 16.","DOI":"10.3390\/rs16173136"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"St-Laurent, L., Maldague, X., and Pr\u00e9vost, D. (2007, January 9\u201312). Combination of colour and thermal sensors for enhanced object detection. Proceedings of the 2007 10th International Conference on Information Fusion, Quebec, QC, Canada.","DOI":"10.1109\/ICIF.2007.4408003"},{"key":"ref_4","unstructured":"Toga, A.W., and Mazziotta, J.C. (2000). 9\u2014The Human Visual System. Brain Mapping: The Systems, Academic Press."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"4745","DOI":"10.1109\/TCSVT.2023.3331499","article-title":"Nighttime Thermal Infrared Image Colorization with Feedback-Based Object Appearance Learning","volume":"34","author":"Luo","year":"2024","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"1120","DOI":"10.1109\/TIP.2005.864231","article-title":"Fast image and video colorization using chrominance blending","volume":"15","author":"Yatziv","year":"2006","journal-title":"IEEE Trans. 
Image Process."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"1214","DOI":"10.1145\/1141911.1142017","article-title":"Manga colorization","volume":"25","author":"Qu","year":"2006","journal-title":"ACM Trans. Graph."},{"key":"ref_8","unstructured":"Luan, Q., Wen, F., Cohen-Or, D., Liang, L., Xu, Y.Q., and Shum, H.Y. (2007, January 25\u201327). Natural image colorization. Proceedings of the 18th Eurographics Conference on Rendering Techniques, Goslar, DEU, Goslar, Germany. EGSR\u201907."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/1409060.1409118","article-title":"AppProp: All-pairs appearance-space edit propagation","volume":"27","author":"An","year":"2008","journal-title":"ACM Trans. Graph."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"22","DOI":"10.1145\/1531326.1531328","article-title":"Edge-avoiding wavelets and their applications","volume":"28","author":"Fattal","year":"2009","journal-title":"ACM Trans. Graph."},{"key":"ref_11","first-page":"1","article-title":"Efficient affinity-based edit propagation using K-D tree","volume":"28","author":"Xu","year":"2009","journal-title":"ACM Trans. Graph."},{"key":"ref_12","unstructured":"Ironi, R., Cohen-Or, D., and Lischinski, D. (July, January 29). Colorization by example. Proceedings of the Eurographics Symposium on Rendering, Konstanz, Germany."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"152","DOI":"10.1145\/1409060.1409105","article-title":"Intrinsic colorization","volume":"27","author":"Liu","year":"2008","journal-title":"ACM Trans. Graph."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Morimoto, Y., Taguchi, Y., and Naemura, T. (2009, January 3\u20137). Automatic colorization of grayscale images using multiple images on the web. Proceedings of the SIGGRAPH 2009: Talks, New York, NY, USA. 
SIGGRAPH \u201909.","DOI":"10.1145\/1597990.1598049"},{"key":"ref_15","unstructured":"Gupta, R.K., Chia, A.Y.S., Rajan, D., Ng, E.S., and Zhiyong, H. (November, January 29). Image colorization using similar images. Proceedings of the 20th ACM International Conference on Multimedia, New York, NY, USA. MM \u201912."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"298","DOI":"10.1109\/TIP.2013.2288929","article-title":"Variational Exemplar-Based Image Colorization","volume":"23","author":"Bugeau","year":"2014","journal-title":"IEEE Trans. Image Process."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"4606","DOI":"10.1109\/TIP.2019.2912291","article-title":"Automatic Example-Based Image Colorization Using Location-Aware Cross-Scale Matching","volume":"28","author":"Li","year":"2019","journal-title":"IEEE Trans. Image Process."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"2931","DOI":"10.1109\/TVCG.2019.2908363","article-title":"A Superpixel-Based Variational Model for Image Colorization","volume":"26","author":"Fang","year":"2020","journal-title":"IEEE Trans. Vis. Comput. Graph."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"1241","DOI":"10.1109\/TPAMI.2012.47","article-title":"VCells: Simple and Efficient Superpixels Using Edge-Weighted Centroidal Voronoi Tessellations","volume":"34","author":"Wang","year":"2012","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Yang, S., Sun, M., Lou, X., Yang, H., and Liu, D. (2024). Nighttime Thermal Infrared Image Translation Integrating Visible Images. Remote Sens., 16.","DOI":"10.3390\/rs16040666"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Yang, S., Sun, M., Lou, X., Yang, H., and Zhou, H. (2023). An Unpaired Thermal Infrared Image Translation Method Using GMA-CycleGAN. 
Remote Sens., 15.","DOI":"10.3390\/rs15030663"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Tan, D., Liu, Y., Li, G., Yao, L., Sun, S., and He, Y. (2021). Serial GANs: A Feature-Preserving Heterogeneous Remote Sensing Image Transformation Model. Remote Sens., 13.","DOI":"10.3390\/rs13193968"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Tang, R., Liu, H., and Wei, J. (2020). Visualizing Near Infrared Hyperspectral Images with Generative Adversarial Networks. Remote Sens., 12.","DOI":"10.3390\/rs12233848"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Cheng, Z., Yang, Q., and Sheng, B. (2015, January 7\u201313). Deep Colorization. Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile. ICCV \u201915.","DOI":"10.1109\/ICCV.2015.55"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"110","DOI":"10.1145\/2897824.2925974","article-title":"Let there be color! Joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification","volume":"35","author":"Iizuka","year":"2016","journal-title":"ACM Trans. Graph."},{"key":"ref_26","unstructured":"Larsson, G., Maire, M., and Shakhnarovich, G. (2024, September 16). Learning Representations for Automatic Colorization. Available online: http:\/\/arxiv.org\/abs\/1603.06668."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Zhang, R., Isola, P., and Efros, A.A. (2016, January 11\u201314). Colorful Image Colorization. Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46487-9_40"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Lee, G., Shin, S., Na, T., and Woo, S.S. (2024, January 1\u20136). Real-Time User-guided Adaptive Colorization with Vision Transformer. 
Proceedings of the 2024 IEEE\/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA.","DOI":"10.1109\/WACV57701.2024.00054"},{"key":"ref_29","unstructured":"Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014, January 8\u201313). Generative adversarial nets. Proceedings of the 27th International Conference on Neural Information Processing Systems\u2014Volume 2, Cambridge, MA, USA. NIPS\u201914."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J.Y., Zhou, T., and Efros, A.A. (2017, January 21\u201326). Image-to-Image Translation with Conditional Adversarial Networks. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.632"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Zhu, J.Y., Park, T., Isola, P., and Efros, A.A. (2017, January 22\u201329). Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.244"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"47","DOI":"10.1145\/3197517.3201365","article-title":"Deep exemplar-based colorization","volume":"37","author":"He","year":"2018","journal-title":"ACM Trans. Graph."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Zhang, B., He, M., Liao, J., Sander, P.V., Yuan, L., Bermak, A., and Chen, D. (2019, January 15\u201320). Deep Exemplar-Based Video Colorization. 
Proceedings of the 2019 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00824"},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"625","DOI":"10.1007\/s13198-020-00960-5","article-title":"Implementation of image colorization with convolutional neural network","volume":"11","author":"Dabas","year":"2020","journal-title":"Int. J. Syst. Assur. Eng. Manag."},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"129","DOI":"10.1016\/j.neucom.2021.04.014","article-title":"Pyramid convolutional network for colorization in monochrome-color multi-lens camera system","volume":"450","author":"Dong","year":"2021","journal-title":"Neurocomputing"},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"109968","DOI":"10.1016\/j.patcog.2023.109968","article-title":"Structure-preserving feature alignment for old photo colorization","volume":"145","author":"Pang","year":"2024","journal-title":"Pattern Recogn."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Su\u00e1rez, P.L., Sappa, A.D., and Vintimilla, B.X. (2017, January 11\u201315). Colorizing Infrared Images Through a Triplet Conditional DCGAN Architecture. Proceedings of the International Conference on Image Analysis and Processing, Catania, Italy.","DOI":"10.1007\/978-3-319-68560-1_26"},{"key":"ref_38","unstructured":"Benaim, S., and Wolf, L. (2017, January 4\u20139). One-sided unsupervised domain mapping. Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA. NIPS\u201917."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Bansal, A., Ma, S., Ramanan, D., and Sheikh, Y. (2018, January 8\u201314). Recycle-GAN: Unsupervised Video Retargeting. 
Proceedings of the Computer Vision\u2014ECCV 2018: 15th European Conference, Munich, Germany.","DOI":"10.1007\/978-3-030-01228-1_8"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Kniaz, V.V., Knyaz, V.A., Hlad\u016fvka, J., Kropatsch, W.G., and Mizginov, V. (2018, January 8\u201314). ThermalGAN: Multimodal Color-to-Thermal Image Translation for Person Re-identification in Multispectral Dataset. Proceedings of the ECCV Workshops, Munich, Germany.","DOI":"10.1007\/978-3-030-11024-6_46"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Mehri, A., and Sappa, A.D. (2019, January 16\u201317). Colorizing Near Infrared Images through a Cyclic Adversarial Approach of Unpaired Samples. Proceedings of the 2019 IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA.","DOI":"10.1109\/CVPRW.2019.00128"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Abbott, R., Robertson, N.M., del Rinc\u00f3n, J.M., and Connor, B. (2020, January 14\u201319). Unsupervised object detection via LWIR\/RGB translation. Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA.","DOI":"10.1109\/CVPRW50498.2020.00053"},{"key":"ref_43","unstructured":"Emami, H., Aliabadi, M.M., Dong, M., and Chinnam, R.B. (2024, September 17). SPA-GAN: Spatial Attention GAN for Image-to-Image Translation. [arXiv:cs.CV\/1908.06616]. Available online: http:\/\/arxiv.org\/abs\/1908.06616."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Chen, R., Huang, W., Huang, B., Sun, F., and Fang, B. (2020, January 13\u201319). Reusing Discriminators for Encoding: Towards Unsupervised Image-to-Image Translation. 
Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00819"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Park, T., Efros, A.A., Zhang, R., and Zhu, J.Y. (2020, January 23\u201328). Contrastive Learning for Unpaired Image-to-Image Translation. Proceedings of the European Conference on Computer Vision, Glasgow, UK.","DOI":"10.1007\/978-3-030-58545-7_19"},{"key":"ref_46","unstructured":"Han, J., Shoeiby, M., Petersson, L., and Armin, M.A. (2024, September 17). Dual Contrastive Learning for Unsupervised Image-to-Image Translation. [arXiv:cs.CV\/2104.07689]. Available online: http:\/\/arxiv.org\/abs\/2104.07689."},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"26465","DOI":"10.1007\/s11042-021-10881-5","article-title":"A fully-automatic image colorization scheme using improved CycleGAN with skip connections","volume":"80","author":"Huang","year":"2021","journal-title":"Multimed. Tools Appl."},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Li, S., Han, B., Yu, Z., Liu, C.H., Chen, K., and Wang, S. (2021, January 17). I2V-GAN: Unpaired Infrared-to-Visible Video Translation. Proceedings of the 29th ACM International Conference on Multimedia, New York, NY, USA. MM \u201921.","DOI":"10.1145\/3474085.3475445"},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/TIM.2022.3166202","article-title":"MobileAR-GAN: MobileNet-Based Efficient Attentive Recurrent Generative Adversarial Network for Infrared-to-Visual Transformations","volume":"71","author":"Yadav","year":"2022","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"15808","DOI":"10.1109\/TITS.2022.3145476","article-title":"Thermal Infrared Image Colorization for Nighttime Driving Scenes With Top-Down Guided Attention","volume":"23","author":"Luo","year":"2021","journal-title":"IEEE Trans. Intell. Transp. 
Syst."},{"key":"ref_51","unstructured":"Yu, Z., Chen, K., Li, S., Han, B., Liu, C.H., and Wang, S. (2024, September 17). ROMA: Cross-Domain Region Similarity Matching for Unpaired Nighttime Infrared to Daytime Visible Video Translation. [arXiv:cs.CV\/2204.12367]. Available online: http:\/\/arxiv.org\/abs\/2204.12367."},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Guo, J., Li, J., Fu, H., Gong, M., Zhang, K., and Tao, D. (2022, January 18\u201324). Alleviating Semantics Distortion in Unsupervised Low-Level Image-to-Image Translation via Structure Consistency Constraint. Proceedings of the 2022 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01771"},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Lin, Y., Zhang, S., Chen, T., Lu, Y., Li, G., and Shi, Y. (2022, January 10). Exploring Negatives in Contrastive Learning for Unpaired Image-to-Image Translation. Proceedings of the 30th ACM International Conference on Multimedia, New York, NY, USA. MM \u201922.","DOI":"10.1145\/3503161.3547802"},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"1240","DOI":"10.1109\/JBHI.2023.3263434","article-title":"QEMCGAN: Quantized Evolutionary Gradient Aware Multiobjective Cyclic GAN for Medical Image Translation","volume":"28","author":"Bharti","year":"2024","journal-title":"IEEE J. Biomed. Health Inform."},{"key":"ref_55","doi-asserted-by":"crossref","unstructured":"Zhao, M., Feng, G., Tan, J., Zhang, N., and Lu, X. (2022, January 26\u201328). CSTGAN: Cycle Swin Transformer GAN for Unpaired Infrared Image Colorization. Proceedings of the 2022 3rd International Conference on Control, Robotics and Intelligent System, New York, NY, USA. CCRIS \u201922.","DOI":"10.1145\/3562007.3562053"},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Feng, L., Geng, G., Li, Q., Jiang, Y.H., Li, Z., and Li, K. (2023). 
CRPGAN: Learning image-to-image translation of two unpaired images by cross-attention mechanism and parallelization strategy. PLoS ONE, 18.","DOI":"10.1371\/journal.pone.0280073"},{"key":"ref_57","doi-asserted-by":"crossref","first-page":"4111","DOI":"10.1007\/s40747-022-00924-1","article-title":"Multi-feature contrastive learning for unpaired image-to-image translation","volume":"9","author":"Gou","year":"2022","journal-title":"Complex Intell. Syst."},{"key":"ref_58","doi-asserted-by":"crossref","first-page":"375","DOI":"10.1007\/s41095-023-0342-8","article-title":"Temporally consistent video colorization with deep feature propagation and self-regularization learning","volume":"10","author":"Liu","year":"2021","journal-title":"Comput. Vis. Media"},{"key":"ref_59","unstructured":"Liang, Z., Li, Z., Zhou, S., Li, C., and Loy, C.C. (2024). Control Color: Multimodal Diffusion-based Interactive Image Colorization. arXiv."},{"key":"ref_60","doi-asserted-by":"crossref","first-page":"127449","DOI":"10.1016\/j.neucom.2024.127449","article-title":"Infrared colorization with cross-modality zero-shot learning","volume":"579","author":"Wei","year":"2024","journal-title":"Neurocomputing"},{"key":"ref_61","unstructured":"Kumar, M., Weissenborn, D., and Kalchbrenner, N. (2021). Colorization Transformer. arXiv."},{"key":"ref_62","doi-asserted-by":"crossref","unstructured":"Kim, S., Baek, J., Park, J., Kim, G., and Kim, S. (2022, January 18\u201324). InstaFormer: Instance-Aware Image-to-Image Translation with Transformer. Proceedings of the 2022 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01778"},{"key":"ref_63","doi-asserted-by":"crossref","unstructured":"Ji, X., Jiang, B., Luo, D., Tao, G., Chu, W., Xie, Z., Wang, C., and Tai, Y. (2022). 
ColorFormer: Image Colorization via Color Memory Assisted Hybrid-Attention Transformer, Springer.","DOI":"10.1007\/978-3-031-19787-1_2"},{"key":"ref_64","unstructured":"Zheng, W., Li, Q., Zhang, G., Wan, P., and Wang, Z. (2024, September 17). ITTR: Unpaired Image-to-Image Translation with Transformers. [arXiv:cs.CV\/2203.16015]. Available online: http:\/\/arxiv.org\/abs\/2203.16015."},{"key":"ref_65","doi-asserted-by":"crossref","unstructured":"Torbunov, D., Huang, Y., Yu, H., zhi Huang, J., Yoo, S., Lin, M., Viren, B., and Ren, Y. (2023, January 2\u20137). UVCGAN: UNet Vision Transformer cycle-consistent GAN for unpaired image-to-image translation. Proceedings of the 2023 IEEE\/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA.","DOI":"10.1109\/WACV56688.2023.00077"},{"key":"ref_66","doi-asserted-by":"crossref","unstructured":"Ma, T., Li, B., Liu, W., Hua, M., Dong, J., and Tan, T. (2023). CFFT-GAN: Cross-domain Feature Fusion Transformer for Exemplar-based Image Translation. arXiv.","DOI":"10.1609\/aaai.v37i2.25279"},{"key":"ref_67","unstructured":"Jiang, C., Gao, F., Ma, B., Lin, Y., Wang, N., and Xu, G. (2024, September 17). Masked and Adaptive Transformer for Exemplar Based Image Translation. [arXiv:cs.CV\/2303.17123]. Available online: http:\/\/arxiv.org\/abs\/2303.17123."},{"key":"ref_68","doi-asserted-by":"crossref","first-page":"111240","DOI":"10.1016\/j.knosys.2023.111240","article-title":"Exemplar-based Video Colorization with Long-term Spatiotemporal Dependency","volume":"284","author":"Chen","year":"2023","journal-title":"Knowl. Based Syst."},{"key":"ref_69","unstructured":"Wu, Z., Liu, Z., Lin, J., Lin, Y., and Han, S. (2020, January 30). Lite Transformer with Long-Short Range Attention. Proceedings of the International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia."},{"key":"ref_70","doi-asserted-by":"crossref","unstructured":"Hu, J., Shen, L., and Sun, G. (2018, January 18\u201323). 
Squeeze-and-Excitation Networks. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00745"},{"key":"ref_71","doi-asserted-by":"crossref","unstructured":"Hwang, S., Park, J., Kim, N., Choi, Y., and Kweon, I.S. (2015, January 7\u201312). Multispectral pedestrian detection: Benchmark dataset and baseline. Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298706"},{"key":"ref_72","doi-asserted-by":"crossref","unstructured":"Zhang, R., Isola, P., Efros, A.A., Shechtman, E., and Wang, O. (2018, January 18\u201323). The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00068"},{"key":"ref_73","doi-asserted-by":"crossref","first-page":"2117","DOI":"10.1109\/TIP.2005.859389","article-title":"An information fidelity criterion for image quality assessment using natural scene statistics","volume":"14","author":"Sheikh","year":"2005","journal-title":"IEEE Trans. Image Process."},{"key":"ref_74","doi-asserted-by":"crossref","unstructured":"Chen, Y., Pan, Y., Yao, T., Tian, X., and Mei, T. (2019, January 21\u201325). Mocycle-GAN: Unpaired Video-to-Video Translation. Proceedings of the 27th ACM International Conference on Multimedia, Nice, France.","DOI":"10.1145\/3343031.3350937"},{"key":"ref_75","unstructured":"Anoosheh, A., Sattler, T., Timofte, R., Pollefeys, M., and Gool, L.V. (2024, September 10). Night-to-Day Image Translation for Retrieval-Based Localization. [arXiv:cs.CV\/1809.09767]. 
Available online: http:\/\/arxiv.org\/abs\/1809.09767."}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/16\/20\/3766\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T16:10:49Z","timestamp":1760112649000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/16\/20\/3766"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,10,10]]},"references-count":75,"journal-issue":{"issue":"20","published-online":{"date-parts":[[2024,10]]}},"alternative-id":["rs16203766"],"URL":"https:\/\/doi.org\/10.3390\/rs16203766","relation":{},"ISSN":["2072-4292"],"issn-type":[{"type":"electronic","value":"2072-4292"}],"subject":[],"published":{"date-parts":[[2024,10,10]]}}}