{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,16]],"date-time":"2026-04-16T23:28:44Z","timestamp":1776382124027,"version":"3.51.2"},"reference-count":48,"publisher":"MDPI AG","issue":"15","license":[{"start":{"date-parts":[[2022,8,4]],"date-time":"2022-08-04T00:00:00Z","timestamp":1659571200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62071384"],"award-info":[{"award-number":["62071384"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Key Research and Development Project of Shaanxi Province","award":["2020ZDLGY04-09"],"award-info":[{"award-number":["2020ZDLGY04-09"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Optical images are rich in spectral information but difficult to acquire under all-weather conditions, whereas SAR images can be acquired in adverse meteorological conditions; however, geometric distortion and speckle noise degrade SAR image quality and make interpretation more challenging. Transforming SAR images into optical images to assist SAR image interpretation therefore opens new opportunities for SAR image applications. With the advancement of deep learning, the quality of SAR-to-optical transformation has improved greatly. 
However, most current mainstream transformation methods do not consider the imaging characteristics of SAR images, so failures such as noisy color spots and regional landform deformation appear in the generated optical images. Moreover, since a SAR image itself contains no color information, the results also suffer from many color errors. To address these problems, Sar2color, an end-to-end general SAR-to-optical transformation model, is proposed based on a conditional generative adversarial network (CGAN). The model uses a DCT residual block to reduce the effect of coherent speckle noise on the generated optical images and constructs a light atrous spatial pyramid pooling (Light-ASPP) module to mitigate the negative effect of geometric distortion on optical image generation. These two designs preserve texture details when a SAR image is transformed into an optical image, and a correct color memory block (CCMB) is used to improve the color accuracy of the transformation results. We evaluated Sar2color on SEN1-2, a paired dataset of homologous heterogeneous SAR and optical images. The experimental results show that, compared with other mainstream transformation models, Sar2color achieves state-of-the-art performance on three objective evaluation metrics and one subjective evaluation metric. 
Furthermore, extensive ablation experiments demonstrate the effectiveness of each designed module of Sar2color.<\/jats:p>","DOI":"10.3390\/rs14153740","type":"journal-article","created":{"date-parts":[[2022,8,5]],"date-time":"2022-08-05T02:12:39Z","timestamp":1659665559000},"page":"3740","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":21,"title":["Sar2color: Learning Imaging Characteristics of SAR Images for SAR-to-Optical Transformation"],"prefix":"10.3390","volume":"14","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8024-1434","authenticated-orcid":false,"given":"Zhe","family":"Guo","sequence":"first","affiliation":[{"name":"School of Electronics and Information, Northwestern Polytechnical University, Xi\u2019an 710072, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7791-0449","authenticated-orcid":false,"given":"Haojie","family":"Guo","sequence":"additional","affiliation":[{"name":"School of Electronics and Information, Northwestern Polytechnical University, Xi\u2019an 710072, China"}]},{"given":"Xuewen","family":"Liu","sequence":"additional","affiliation":[{"name":"School of Electronics and Information, Northwestern Polytechnical University, Xi\u2019an 710072, China"}]},{"given":"Weijie","family":"Zhou","sequence":"additional","affiliation":[{"name":"School of Electronics and Information, Northwestern Polytechnical University, Xi\u2019an 710072, China"}]},{"given":"Yi","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Electronics and Information, Northwestern Polytechnical University, Xi\u2019an 710072, China"}]},{"given":"Yangyu","family":"Fan","sequence":"additional","affiliation":[{"name":"School of Electronics and Information, Northwestern Polytechnical University, Xi\u2019an 710072, 
China"}]}],"member":"1968","published-online":{"date-parts":[[2022,8,4]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Scarpa, G., Gargiulo, M., Mazza, A., and Gaetano, R. (2018). A CNN-based fusion method for feature extraction from sentinel data. Remote Sens., 10.","DOI":"10.3390\/rs10020236"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Lyu, H., Lu, H., and Mou, L. (2016). Learning a transferable change rule from a recurrent neural network for land cover change detection. Remote Sens., 8.","DOI":"10.3390\/rs8060506"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"3369","DOI":"10.1080\/01431161003727671","article-title":"Building-damage detection using post-seismic high-resolution SAR satellite data","volume":"31","author":"Balz","year":"2010","journal-title":"Int. J. Remote Sens."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"465","DOI":"10.1016\/S0273-1177(97)00882-X","article-title":"Landslide characterisation in Canada using interferometric SAR and combined SAR and TM images","volume":"21","author":"Singhroy","year":"1998","journal-title":"Adv. Space Res."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"108021","DOI":"10.1016\/j.geomorph.2021.108021","article-title":"Exploring event landslide mapping using Sentinel-1 SAR backscatter products","volume":"397","author":"Santangelo","year":"2022","journal-title":"Geomorphology"},{"key":"ref_6","first-page":"1","article-title":"Balance scene learning mechanism for offshore and inshore ship detection in SAR images","volume":"19","author":"Zhang","year":"2020","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Gao, J., Yuan, Q., Li, J., Zhang, H., and Su, X. (2020). Cloud removal with fusion of high resolution optical and SAR images using generative adversarial networks. 
Remote Sens., 12.","DOI":"10.3390\/rs12010191"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"441","DOI":"10.1016\/j.rse.2014.06.025","article-title":"Simulating SAR geometric distortions and predicting Persistent Scatterer densities for ERS-1\/2 and ENVISAT C-band SAR and InSAR applications: Nationwide feasibility assessment to monitor the landmass of Great Britain with SAR imagery","volume":"152","author":"Cigna","year":"2014","journal-title":"Remote Sens. Environ."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Maity, A., Pattanaik, A., Sagnika, S., and Pani, S. (2015, January 12\u201313). A comparative study on approaches to speckle noise reduction in images. Proceedings of the 2015 International Conference on Computational Intelligence and Networks, Odisha, India.","DOI":"10.1109\/CINE.2015.36"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Zhang, Q., Liu, X., Liu, M., Zou, X., Zhu, L., and Ruan, X. (2021). Comparative analysis of edge information and polarization on sar-to-optical translation based on conditional generative adversarial networks. Remote Sens., 13.","DOI":"10.3390\/rs13010128"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Guo, J., He, C., Zhang, M., Li, Y., Gao, X., and Song, B. (2021). Edge-Preserving Convolutional Generative Adversarial Networks for SAR-to-Optical Image Translation. Remote Sens., 13.","DOI":"10.3390\/rs13183575"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Kong, Y., Hong, F., Leung, H., and Peng, X. (2021). A Fusion Method of Optical Image and SAR Image Based on Dense-UGAN and Gram\u2013Schmidt Transformation. Remote Sens., 13.","DOI":"10.3390\/rs13214274"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/TGRS.2020.3034752","article-title":"Self-supervised sar-optical data fusion of sentinel-1\/-2 images","volume":"60","author":"Chen","year":"2021","journal-title":"IEEE Trans. Geosci. 
Remote Sens."},{"key":"ref_14","unstructured":"Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial networks. arXiv."},{"key":"ref_15","unstructured":"Mirza, M., and Osindero, S. (2014). Conditional generative adversarial nets. arXiv."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J.Y., Zhou, T., and Efros, A.A. (2017, January 21\u201326). Image-to-image translation with conditional adversarial networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.632"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Zhu, J.Y., Park, T., Isola, P., and Efros, A.A. (2017, January 22\u201329). Unpaired image-to-image translation using cycle-consistent adversarial networks. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.244"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Wang, T.C., Liu, M.Y., Zhu, J.Y., Tao, A., Kautz, J., and Catanzaro, B. (2018, January 18\u201323). High-resolution image synthesis and semantic manipulation with conditional gans. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00917"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Cho, W., Choi, S., Park, D.K., Shin, I., and Choo, J. (2019, January 15\u201320). Image-to-image translation via group-wise deep whitening-and-coloring transformation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.01089"},{"key":"ref_20","unstructured":"Reed, S., Akata, Z., Yan, X., Logeswaran, L., Schiele, B., and Lee, H. (2016, January 19\u201324). Generative adversarial text to image synthesis. 
Proceedings of the International Conference on Machine Learning (PMLR), New York, NY, USA."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Zhang, H., Xu, T., Li, H., Zhang, S., Wang, X., Huang, X., and Metaxas, D.N. (2017, January 22\u201329). Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.629"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Bahng, H., Yoo, S., Cho, W., Park, D.K., Wu, Z., Ma, X., and Choo, J. (2018, January 8\u201314). Coloring with words: Guiding image colorization through text-based palette generation. Proceedings of the European Conference on Computer Vision, Munich, Germany.","DOI":"10.1007\/978-3-030-01258-8_27"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Yoo, S., Bahng, H., Chung, S., Lee, J., Chang, J., and Choo, J. (2019, January 15\u201320). Coloring with limited data: Few-shot colorization via memory augmented networks. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.01154"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Choi, Y., Choi, M., Kim, M., Ha, J.W., Kim, S., and Choo, J. (2018, January 18\u201323). Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00916"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Pumarola, A., Agudo, A., Martinez, A.M., Sanfeliu, A., and Moreno-Noguer, F. (2018, January 8\u201314). Ganimation: Anatomically-aware facial animation from a single image. 
Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01249-6_50"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"1811","DOI":"10.1109\/JSTARS.2018.2803212","article-title":"Exploring the potential of conditional adversarial networks for optical and SAR image matching","volume":"11","author":"Merkle","year":"2018","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Doi, K., Sakurada, K., Onishi, M., and Iwasaki, A. (October, January 26). GAN-Based SAR-to-Optical Image Translation with Region Information. Proceedings of the IGARSS 2020\u20142020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA.","DOI":"10.1109\/IGARSS39084.2020.9323085"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Yu, T., Zhang, J., and Zhou, J. (2021, January 23\u201325). Conditional GAN with Effective Attention for SAR-to-Optical Image Translation. Proceedings of the 2021 3rd International Conference on Advances in Computer Technology, Information Science and Communication (CTISC), Shanghai, China.","DOI":"10.1109\/CTISC52352.2021.00009"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/TGRS.2021.3131035","article-title":"Cloud removal in remote sensing images using generative adversarial networks and SAR-to-optical image translation","volume":"60","author":"Darbaghshahi","year":"2021","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Zuo, Z., and Li, Y. (2021, January 11\u201316). A SAR-to-Optical Image Translation Method Based on PIX2PIX. 
Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium.","DOI":"10.1109\/IGARSS47720.2021.9555111"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/LGRS.2020.3031199","article-title":"Atrous cgan for sar to optical image translation","volume":"19","author":"Turnes","year":"2020","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Tan, D., Liu, Y., Li, G., Yao, L., Sun, S., and He, Y. (2021). Serial GANs: A Feature-Preserving Heterogeneous Remote Sensing Image Transformation Model. Remote Sens., 13.","DOI":"10.3390\/rs13193968"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Schmitt, M., Hughes, L.H., and Zhu, X.X. (2018). The SEN1-2 dataset for deep learning in SAR-optical data fusion. arXiv.","DOI":"10.5194\/isprs-annals-IV-1-141-2018"},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"800","DOI":"10.1049\/el:20080522","article-title":"Scope of validity of PSNR in image\/video quality assessment","volume":"44","author":"Ghanbari","year":"2008","journal-title":"Electron. Lett."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Hore, A., and Ziou, D. (2010, January 23\u201326). Image quality metrics: PSNR vs. SSIM. Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey.","DOI":"10.1109\/ICPR.2010.579"},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"1247","DOI":"10.5194\/gmd-7-1247-2014","article-title":"Root mean square error (RMSE) or mean absolute error (MAE)?\u2013Arguments against avoiding RMSE in the literature","volume":"7","author":"Chai","year":"2014","journal-title":"Geosci. Model Dev."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Zhang, R., Isola, P., Efros, A.A., Shechtman, E., and Wang, O. (2018, January 18\u201323). The unreasonable effectiveness of deep features as a perceptual metric. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00068"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Rubel, O.S., Lukin, V.V., and De Medeiros, F.S. (2015, January 10\u201312). Prediction of Despeckling Efficiency of DCT-based filters Applied to SAR Images. Proceedings of the 2015 International Conference on Distributed Computing in Sensor Systems, Fortaleza, Brazil.","DOI":"10.1109\/DCOSS.2015.16"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Meenakshi, K., Swaraja, K., and Kora, P. (2019). A robust DCT-SVD based video watermarking using zigzag scanning. Soft Computing and Signal Processing, Springer.","DOI":"10.1007\/978-981-13-3600-3_45"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Chen, L.C., Zhu, Y., Papandreou, G., Schroff, F., and Adam, H. (2018, January 8\u201314). Encoder-decoder with atrous separable convolution for semantic image segmentation. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01234-2_49"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Guo, H., Guo, Z., Pan, Z., and Liu, X. (2021, January 1\u20133). Bilateral Res-Unet for Image Colorization with Limited Data via GANs. 
Proceedings of the 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI), Washington, DC, USA.","DOI":"10.1109\/ICTAI52525.2021.00116"},{"key":"ref_44","first-page":"1097","article-title":"Imagenet classification with deep convolutional neural networks","volume":"25","author":"Krizhevsky","year":"2012","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_45","unstructured":"Peters, A.F., and Peters, P. (2015). The Color Thief, Albert Whitman and Company."},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Guo, G., Wang, H., Bell, D., Bi, Y., and Greer, K. (2003). KNN model-based approach in classification. Proceedings of the OTM Confederated International Conferences On the Move to Meaningful Internet Systems, Springer.","DOI":"10.1007\/978-3-540-39964-3_62"},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Li, Y., Chen, X., Wu, F., and Zha, Z.J. (2019, January 21\u201325). Linestofacephoto: Face photo generation from lines with conditional self-attention generative adversarial networks. Proceedings of the 27th ACM International Conference on Multimedia, Nice, France.","DOI":"10.1145\/3343031.3350854"},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Xian, W., Sangkloy, P., Agrawal, V., Raj, A., Lu, J., Fang, C., Yu, F., and Hays, J. (2018, January 18\u201323). Texturegan: Controlling deep image synthesis with texture patches. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00882"}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/14\/15\/3740\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:04:14Z","timestamp":1760141054000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/14\/15\/3740"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,8,4]]},"references-count":48,"journal-issue":{"issue":"15","published-online":{"date-parts":[[2022,8]]}},"alternative-id":["rs14153740"],"URL":"https:\/\/doi.org\/10.3390\/rs14153740","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,8,4]]}}}