{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,22]],"date-time":"2026-02-22T06:02:39Z","timestamp":1771740159294,"version":"3.50.1"},"reference-count":64,"publisher":"MDPI AG","issue":"23","license":[{"start":{"date-parts":[[2022,12,3]],"date-time":"2022-12-03T00:00:00Z","timestamp":1670025600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Youth Innovation Promotion Association, CAS","award":["2022119"],"award-info":[{"award-number":["2022119"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Optical remote-sensing images have a wide range of applications, but they are often obscured by clouds, which affects subsequent analysis. Therefore, cloud removal becomes a necessary preprocessing step. In this paper, a novel and superior transformer-based network is proposed, named Cloudformer. The proposed method novelly combines the advantages of convolution and a self-attention mechanism: it uses convolution layers to extract simple features over a small range in the shallow layer, and exerts the advantage of a self-attention mechanism in extracting correlation in a large range in the deep layer. This method also introduces Locally-enhanced Positional Encoding (LePE) to flexibly generate suitable positional encodings for different inputs and to utilize local information to enhance encoding capabilities. 
Exhaustive experiments on public datasets demonstrate the superior ability of the method to remove both thin and thick clouds, and the effectiveness of the proposed modules is validated by ablation studies.<\/jats:p>","DOI":"10.3390\/rs14236132","type":"journal-article","created":{"date-parts":[[2022,12,5]],"date-time":"2022-12-05T05:31:32Z","timestamp":1670218292000},"page":"6132","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":29,"title":["Cloudformer: A Cloud-Removal Network Combining Self-Attention Mechanism and Convolution"],"prefix":"10.3390","volume":"14","author":[{"given":"Peiyang","family":"Wu","sequence":"first","affiliation":[{"name":"Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China"},{"name":"Key Laboratory of Technology in Geo-Spatial Information Processing and Application System, Chinese Academy of Sciences, Beijing 100190, China"},{"name":"School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5041-3300","authenticated-orcid":false,"given":"Zongxu","family":"Pan","sequence":"additional","affiliation":[{"name":"Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China"},{"name":"Key Laboratory of Technology in Geo-Spatial Information Processing and Application System, Chinese Academy of Sciences, Beijing 100190, China"},{"name":"School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2137-3693","authenticated-orcid":false,"given":"Hairong","family":"Tang","sequence":"additional","affiliation":[{"name":"Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China"},{"name":"Key Laboratory of Technology in Geo-Spatial Information Processing and 
Application System, Chinese Academy of Sciences, Beijing 100190, China"},{"name":"School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China"}]},{"given":"Yuxin","family":"Hu","sequence":"additional","affiliation":[{"name":"Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China"},{"name":"Key Laboratory of Technology in Geo-Spatial Information Processing and Application System, Chinese Academy of Sciences, Beijing 100190, China"},{"name":"School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China"}]}],"member":"1968","published-online":{"date-parts":[[2022,12,3]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Maggiori, E., Tarabalka, Y., Charpiat, G., and Alliez, P. (2017, January 23\u201328). Can semantic labeling methods generalize to any city? The Inria aerial image labeling benchmark. Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA.","DOI":"10.1109\/IGARSS.2017.8127684"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"1739","DOI":"10.1016\/j.rse.2009.04.014","article-title":"Monitoring forest changes in the southwestern United States using multitemporal Landsat data","volume":"113","author":"Vogelmann","year":"2009","journal-title":"Remote Sens. Environ."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"3826","DOI":"10.1109\/TGRS.2012.2227333","article-title":"Spatial and temporal distribution of clouds observed by MODIS onboard the Terra and Aqua satellites","volume":"51","author":"King","year":"2013","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_4","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2020). 
An image is worth 16\u00d716 words: Transformers for image recognition at scale. arXiv."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., and Gao, W. (2021, January 19\u201325). Pre-trained image processing transformer. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.01212"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., and Yang, M.H. (2022, January 19\u201320). Restormer: Efficient transformer for high-resolution image restoration. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00564"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., and Zagoruyko, S. (2020, January 23\u201328). End-to-end object detection with transformers. Proceedings of the European Conference on Computer Vision, Glasgow, UK.","DOI":"10.1007\/978-3-030-58452-8_13"},{"key":"ref_8","unstructured":"Zhu, X., Su, W., Lu, L., Li, B., Wang, X., and Dai, J. (2020). Deformable detr: Deformable transformers for end-to-end object detection. arXiv."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Wang, Y., Xu, Z., Wang, X., Shen, C., Cheng, B., Shen, H., and Xia, H. (2021, January 19\u201325). End-to-end video instance segmentation with transformers. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00863"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Zheng, S., Lu, J., Zhao, H., Zhu, X., Luo, Z., Wang, Y., Fu, Y., Feng, J., Xiang, T., and Torr, P.H. (2021, January 19\u201325). Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00681"},{"key":"ref_11","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, \u0141., and Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems, MIT Press."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Dong, X., Bao, J., Chen, D., Zhang, W., Yu, N., Yuan, L., Chen, D., and Guo, B. (2022, January 19\u201320). Cswin transformer: A general vision transformer backbone with cross-shaped windows. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01181"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"232","DOI":"10.1109\/TGRS.2012.2197682","article-title":"Cloud removal from multitemporal satellite images using information cloning","volume":"51","author":"Lin","year":"2012","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"54","DOI":"10.1016\/j.isprsjprs.2014.02.015","article-title":"Cloud removal for remotely sensed images by similar pixel replacement guided with a spatio-temporal MRF model","volume":"92","author":"Cheng","year":"2014","journal-title":"ISPRS J. Photogramm. Remote Sens."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"7086","DOI":"10.1109\/TGRS.2014.2307354","article-title":"Recovering quantitative remote sensing products contaminated by thick clouds and shadows using multitemporal dictionary learning","volume":"52","author":"Li","year":"2014","journal-title":"IEEE Trans. Geosci. 
Remote Sens."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"27","DOI":"10.1109\/TGRS.2016.2580576","article-title":"Spatially and temporally weighted regression: A novel method to produce continuous cloud-free Landsat imagery","volume":"55","author":"Chen","year":"2016","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"1090","DOI":"10.1109\/LGRS.2018.2829028","article-title":"Two-pass robust component analysis for cloud removal in satellite image sequence","volume":"15","author":"Wen","year":"2018","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_18","first-page":"1","article-title":"A unified framework of cloud detection and removal based on low-rank and group sparse regularizations for multitemporal multispectral images","volume":"60","author":"Ji","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"2973","DOI":"10.3390\/rs5062973","article-title":"Removal of optically thick clouds from multi-spectral satellite images using multi-frequency SAR data","volume":"5","author":"Eckardt","year":"2013","journal-title":"Remote Sens."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Zhu, C., Zhao, Z., Zhu, X., Nie, Z., and Liu, Q.H. (2016, January 6\u201310). Cloud removal for optical images using SAR structure data. Proceedings of the 2016 IEEE 13th International Conference on Signal Processing (ICSP), Chengdu, China.","DOI":"10.1109\/ICSP.2016.7878153"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"1870","DOI":"10.1109\/JSTARS.2017.2655101","article-title":"Removal of optically thick clouds from high-resolution satellite imagery using dictionary group learning and interdictionary nonlocal joint sparse coding","volume":"10","author":"Li","year":"2017","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. 
Remote Sens."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"2865","DOI":"10.1109\/TGRS.2019.2956959","article-title":"Thick cloud removal with optical and SAR imagery via convolutional-mapping-deconvolutional network","volume":"58","author":"Li","year":"2019","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"569","DOI":"10.14358\/PERS.75.5.569","article-title":"Closest spectral fit for removing clouds and cloud shadows","volume":"75","author":"Meng","year":"2009","journal-title":"Photogramm. Eng. Remote Sens."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"459","DOI":"10.1016\/0034-4257(88)90019-3","article-title":"An improved dark-object subtraction technique for atmospheric scattering correction of multispectral data","volume":"24","year":"1988","journal-title":"Remote Sens. Environ."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"173","DOI":"10.1016\/S0034-4257(02)00034-2","article-title":"An image transform to characterize and compensate for spatial variations in thin cloud contamination of Landsat images","volume":"82","author":"Zhang","year":"2002","journal-title":"Remote Sens. Environ."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"5331","DOI":"10.1080\/01431160903369600","article-title":"Haze removal based on advanced haze-optimized transformation (AHOT) for multispectral imagery","volume":"31","author":"He","year":"2010","journal-title":"Int. J. Remote Sens."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"210","DOI":"10.1109\/36.981363","article-title":"Haze detection and removal in high resolution satellite image with wavelet analysis","volume":"40","author":"Du","year":"2002","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Siravenha, A.C., Sousa, D., Bispo, A., and Pelaes, E. (2011, January 14\u201316). 
The use of high-pass filters and the inpainting method to clouds removal and their impact on satellite images classification. Proceedings of the International Conference on Image Analysis and Processing, Ravenna, Italy.","DOI":"10.1007\/978-3-642-24088-1_35"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"224","DOI":"10.1016\/j.isprsjprs.2014.06.011","article-title":"An effective thin cloud removal procedure for visible remote sensing images","volume":"96","author":"Shen","year":"2014","journal-title":"ISPRS J. Photogramm. Remote Sens."},{"key":"ref_30","unstructured":"Xu, M., Jia, X., and Pickering, M. (2014, January 13\u201318). Automatic cloud removal for Landsat 8 OLI images using cirrus band. Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"1659","DOI":"10.1109\/TGRS.2015.2486780","article-title":"Thin cloud removal based on signal transmission principles and spectral mixture analysis","volume":"54","author":"Xu","year":"2015","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_32","first-page":"2341","article-title":"Single image haze removal using dark channel prior","volume":"33","author":"He","year":"2010","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"137","DOI":"10.1016\/j.isprsjprs.2019.05.003","article-title":"Thin cloud removal with residual symmetrical concatenation network","volume":"153","author":"Li","year":"2019","journal-title":"ISPRS J. Photogramm. Remote Sens."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Wang, X., Xu, G., Wang, Y., Lin, D., Li, P., and Lin, X. (August, January 28). Thin and thick cloud removal on remote sensing image by conditional generative adversarial network. 
Proceedings of the IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan.","DOI":"10.1109\/IGARSS.2019.8897958"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J.Y., Zhou, T., and Efros, A.A. (2017, January 21\u201326). Image-to-image translation with conditional adversarial networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.632"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Enomoto, K., Sakurada, K., Wang, W., Fukui, H., Matsuoka, M., Nakamura, R., and Kawaguchi, N. (2017, January 21\u201326). Filmy cloud removal on satellite imagery with multispectral conditional generative adversarial nets. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA.","DOI":"10.1109\/CVPRW.2017.197"},{"key":"ref_37","unstructured":"Pan, H. (2020). Cloud removal for remote sensing imagery via spatial attention generative adversarial network. arXiv."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"112902","DOI":"10.1016\/j.rse.2022.112902","article-title":"Attention mechanism-based generative adversarial networks for cloud removal in Landsat images","volume":"271","author":"Xu","year":"2022","journal-title":"Remote Sens. Environ."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Zhu, J.Y., Park, T., Isola, P., and Efros, A.A. (2017, January 22\u201329). Unpaired image-to-image translation using cycle-consistent adversarial networks. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.244"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Singh, P., and Komodakis, N. (2018, January 22\u201327). Cloud-gan: Cloud removal for sentinel-2 imagery using a cyclic consistent generative adversarial networks. 
Proceedings of the IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain.","DOI":"10.1109\/IGARSS.2018.8519033"},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"373","DOI":"10.1016\/j.isprsjprs.2020.06.021","article-title":"Thin cloud removal in optical remote sensing images based on generative adversarial networks and physical model of cloud distortion","volume":"166","author":"Li","year":"2020","journal-title":"ISPRS J. Photogramm. Remote Sens."},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"8292612","DOI":"10.1155\/2021\/8292612","article-title":"SACTNet: Spatial Attention Context Transformation Network for Cloud Removal","volume":"2021","author":"Liu","year":"2021","journal-title":"Wirel. Commun. Mob. Comput."},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"1125","DOI":"10.5194\/isprs-archives-XLIII-B2-2022-1125-2022","article-title":"Cloudtran: Cloud removal from multitemporal satellite images using axial transformer networks","volume":"43","author":"Christopoulos","year":"2022","journal-title":"Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Wu, H., Xiao, B., Codella, N., Liu, M., Dai, X., Yuan, L., and Zhang, L. (2021, January 10\u201317). Cvt: Introducing convolutions to vision transformers. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00009"},{"key":"ref_45","unstructured":"Li, Y., Zhang, K., Cao, J., Timofte, R., and Van Gool, L. (2021). Localvit: Bringing locality to vision transformers. arXiv."},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Yuan, K., Guo, S., Liu, Z., Zhou, A., Yu, F., and Wu, W. (2021, January 10\u201317). Incorporating convolution designs into visual transformers. 
Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00062"},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Li, K., Wang, Y., Zhang, J., Gao, P., Song, G., Liu, Y., Li, H., and Qiao, Y. (2022). Uniformer: Unifying convolution and self-attention for visual recognition. arXiv.","DOI":"10.1109\/TPAMI.2023.3282631"},{"key":"ref_48","unstructured":"Huang, Z., Ben, Y., Luo, G., Cheng, P., Yu, G., and Fu, B. (2021). Shuffle transformer: Rethinking spatial shuffle for vision transformer. arXiv."},{"key":"ref_49","first-page":"9355","article-title":"Twins: Revisiting the design of spatial attention in vision transformers","volume":"34","author":"Chu","year":"2021","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021, January 10\u201317). Swin transformer: Hierarchical vision transformer using shifted windows. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Vaswani, A., Ramachandran, P., Srinivas, A., Parmar, N., Hechtman, B., and Shlens, J. (2021, January 19\u201325). Scaling local self-attention for parameter efficient visual backbones. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.01270"},{"key":"ref_52","unstructured":"Gehring, J., Auli, M., Grangier, D., Yarats, D., and Dauphin, Y.N. (2017, January 6\u201311). Convolutional sequence to sequence learning. Proceedings of the International Conference on Machine Learning, PMLR, Sydney, Australia."},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Shaw, P., Uszkoreit, J., and Vaswani, A. (2018). Self-attention with relative position representations. 
arXiv.","DOI":"10.18653\/v1\/N18-2074"},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q.V., and Salakhutdinov, R. (2019). Transformer-xl: Attentive language models beyond a fixed-length context. arXiv.","DOI":"10.18653\/v1\/P19-1285"},{"key":"ref_55","first-page":"1","article-title":"Exploring the limits of transfer learning with a unified text-to-text transformer","volume":"21","author":"Raffel","year":"2020","journal-title":"J. Mach. Learn. Res."},{"key":"ref_56","unstructured":"He, P., Liu, X., Gao, J., and Chen, W. (2020). Deberta: Decoding-enhanced bert with disentangled attention. arXiv."},{"key":"ref_57","unstructured":"Chu, X., Tian, Z., Zhang, B., Wang, X., Wei, X., Xia, H., and Shen, C. (2021). Conditional positional encodings for vision transformers. arXiv."},{"key":"ref_58","doi-asserted-by":"crossref","unstructured":"Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., and Li, H. (2022, January 19\u201320). Uformer: A general u-shaped transformer for image restoration. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01716"},{"key":"ref_59","unstructured":"Charbonnier, P., Blanc-Feraud, L., Aubert, G., and Barlaud, M. (1994, January 13\u201316). Two deterministic half-quadratic regularization algorithms for computed imaging. Proceedings of the 1st International Conference on Image Processing, Austin, TX, USA."},{"key":"ref_60","doi-asserted-by":"crossref","unstructured":"Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.H., and Shao, L. (2020, January 23\u201328). Learning enriched features for real image restoration and enhancement. Proceedings of the European Conference on Computer Vision, Glasgow, UK.","DOI":"10.1007\/978-3-030-58595-2_30"},{"key":"ref_61","unstructured":"Lin, D., Xu, G., Wang, X., Wang, Y., Sun, X., and Fu, K. (2019). A remote sensing image dataset for cloud removal. 
arXiv."},{"key":"ref_62","unstructured":"Loshchilov, I., and Hutter, F. (2017). Decoupled weight decay regularization. arXiv."},{"key":"ref_63","doi-asserted-by":"crossref","unstructured":"Zhou, J., Luo, X., Rong, W., and Xu, H. (2022). Cloud Removal for Optical Remote Sensing Imagery Using Distortion Coding Network Combined with Compound Loss Functions. Remote Sens., 14.","DOI":"10.3390\/rs14143452"},{"key":"ref_64","doi-asserted-by":"crossref","first-page":"1881","DOI":"10.1080\/01431161.2022.2048915","article-title":"Cloud removal from satellite imagery using multispectral edge-filtered conditional generative adversarial networks","volume":"43","author":"Hasan","year":"2022","journal-title":"Int. J. Remote Sens."}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/14\/23\/6132\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:33:25Z","timestamp":1760146405000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/14\/23\/6132"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,12,3]]},"references-count":64,"journal-issue":{"issue":"23","published-online":{"date-parts":[[2022,12]]}},"alternative-id":["rs14236132"],"URL":"https:\/\/doi.org\/10.3390\/rs14236132","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,12,3]]}}}