{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,17]],"date-time":"2026-01-17T02:38:49Z","timestamp":1768617529535,"version":"3.49.0"},"reference-count":47,"publisher":"MDPI AG","issue":"21","license":[{"start":{"date-parts":[[2023,10,26]],"date-time":"2023-10-26T00:00:00Z","timestamp":1698278400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Key R&amp;D Program of China","award":["2017YFA0604300"],"award-info":[{"award-number":["2017YFA0604300"]}]},{"name":"National Key R&amp;D Program of China","award":["2017YFA0604302"],"award-info":[{"award-number":["2017YFA0604302"]}]},{"name":"National Key R&amp;D Program of China","award":["U1811464"],"award-info":[{"award-number":["U1811464"]}]},{"name":"National Key R&amp;D Program of China","award":["41875122"],"award-info":[{"award-number":["41875122"]}]},{"name":"National Key R&amp;D Program of China","award":["2017TQ04Z359"],"award-info":[{"award-number":["2017TQ04Z359"]}]},{"name":"National Key R&amp;D Program of China","award":["2018XBYJRC004"],"award-info":[{"award-number":["2018XBYJRC004"]}]},{"name":"Natural Science Foundation of China","award":["2017YFA0604300"],"award-info":[{"award-number":["2017YFA0604300"]}]},{"name":"Natural Science Foundation of China","award":["2017YFA0604302"],"award-info":[{"award-number":["2017YFA0604302"]}]},{"name":"Natural Science Foundation of China","award":["U1811464"],"award-info":[{"award-number":["U1811464"]}]},{"name":"Natural Science Foundation of China","award":["41875122"],"award-info":[{"award-number":["41875122"]}]},{"name":"Natural Science Foundation of China","award":["2017TQ04Z359"],"award-info":[{"award-number":["2017TQ04Z359"]}]},{"name":"Natural Science Foundation of China","award":["2018XBYJRC004"],"award-info":[{"award-number":["2018XBYJRC004"]}]},{"name":"Guangdong Top Young 
Talents","award":["2017YFA0604300"],"award-info":[{"award-number":["2017YFA0604300"]}]},{"name":"Guangdong Top Young Talents","award":["2017YFA0604302"],"award-info":[{"award-number":["2017YFA0604302"]}]},{"name":"Guangdong Top Young Talents","award":["U1811464"],"award-info":[{"award-number":["U1811464"]}]},{"name":"Guangdong Top Young Talents","award":["41875122"],"award-info":[{"award-number":["41875122"]}]},{"name":"Guangdong Top Young Talents","award":["2017TQ04Z359"],"award-info":[{"award-number":["2017TQ04Z359"]}]},{"name":"Guangdong Top Young Talents","award":["2018XBYJRC004"],"award-info":[{"award-number":["2018XBYJRC004"]}]},{"name":"Western Talents","award":["2017YFA0604300"],"award-info":[{"award-number":["2017YFA0604300"]}]},{"name":"Western Talents","award":["2017YFA0604302"],"award-info":[{"award-number":["2017YFA0604302"]}]},{"name":"Western Talents","award":["U1811464"],"award-info":[{"award-number":["U1811464"]}]},{"name":"Western Talents","award":["41875122"],"award-info":[{"award-number":["41875122"]}]},{"name":"Western Talents","award":["2017TQ04Z359"],"award-info":[{"award-number":["2017TQ04Z359"]}]},{"name":"Western Talents","award":["2018XBYJRC004"],"award-info":[{"award-number":["2018XBYJRC004"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>The monitoring of rapidly changing land surface processes requires remote sensing images with high spatiotemporal resolution. As remote sensing satellites have different satellite orbits, satellite orbital velocities, and sensors, it is challenging to acquire remote sensing images with high resolution and dense time series within a reasonable temporal interval. Remote sensing spatiotemporal fusion is one of the effective ways to acquire high-resolution images with long time series. Most of the existing STF methods use artificially specified fusion strategies, resulting in blurry images and poor generalization ability. 
Additionally, some methods lack continuous temporal change information, leading to poor performance in capturing abrupt land cover changes. In this paper, we propose an adaptive multiscale network for spatiotemporal fusion (AMS-STF) based on a generative adversarial network (GAN). AMS-STF reconstructs high-resolution images by leveraging the temporal and spatial features of the input data through multiple adaptive modules and multiscale features. In AMS-STF, deformable convolution is applied to the STF task for the first time to address the shape adaptation problem, allowing the convolution kernels to adjust adaptively to the different shapes and types of land cover. Additionally, an adaptive attention module is introduced into the network to enhance its ability to perceive temporal changes. We conducted experiments comparing AMS-STF with widely used and state-of-the-art models on three Landsat-MODIS datasets, as well as ablation experiments to evaluate the proposed modules. 
The results show that the adaptive modules significantly improve the fusion quality of land covers and sharpen their boundaries, demonstrating the effectiveness of AMS-STF.<\/jats:p>","DOI":"10.3390\/rs15215128","type":"journal-article","created":{"date-parts":[[2023,10,27]],"date-time":"2023-10-27T09:56:36Z","timestamp":1698400596000},"page":"5128","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["An Adaptive Multiscale Generative Adversarial Network for the Spatiotemporal Fusion of Landsat and MODIS Data"],"prefix":"10.3390","volume":"15","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-1178-7825","authenticated-orcid":false,"given":"Xiaoyu","family":"Pan","sequence":"first","affiliation":[{"name":"School of Geography and Planning, Sun Yat-sen University, Guangzhou 510275, China"}]},{"given":"Muyuan","family":"Deng","sequence":"additional","affiliation":[{"name":"School of Information and Optoelectronic Science and Engineering, South China Normal University, Guangzhou 510631, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0444-8139","authenticated-orcid":false,"given":"Zurui","family":"Ao","sequence":"additional","affiliation":[{"name":"Beidou Research Institute, South China Normal University, Guangzhou 510631, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1146-4874","authenticated-orcid":false,"given":"Qinchuan","family":"Xin","sequence":"additional","affiliation":[{"name":"School of Geography and Planning, Sun Yat-sen University, Guangzhou 510275, China"},{"name":"State Key Laboratory of Desert and Oasis Ecology, Research Center for Ecology and Environment of Central Asia, Chinese Academy of Sciences, Urumqi 830011, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,10,26]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"3237","DOI":"10.1080\/01431161.2014.903351","article-title":"Blending MODIS and Landsat Images for 
Urban Flood Mapping","volume":"35","author":"Zhang","year":"2014","journal-title":"Int. J. Remote Sens."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Miah, M.T., and Sultana, M. (2022, January 17\u201318). Environmental Impact of Land Use and Land Cover Change in Rampal, Bangladesh: A Google Earth Engine-Based Remote Sensing Approach. Proceedings of the 2022 4th International Conference on Sustainable Technologies for Industry 4.0 (STI), Dhaka, Bangladesh.","DOI":"10.1109\/STI56238.2022.10103236"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"74","DOI":"10.1016\/j.agrformet.2015.11.003","article-title":"Crop Yield Forecasting on the Canadian Prairies by Remotely Sensed Vegetation Indices and Machine Learning Methods","volume":"218\u2013219","author":"Johnson","year":"2016","journal-title":"Agric. For. Meteorol."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"e6469439","DOI":"10.1155\/2017\/6469439","article-title":"A Novel Technique to Compute the Revisit Time of Satellites and Its Application in Remote Sensing Satellite Optimization Design","volume":"2017","author":"Luo","year":"2017","journal-title":"Int. J. Aerosp. Eng."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1707","DOI":"10.1016\/j.scitotenv.2018.09.308","article-title":"Using Spatio-Temporal Fusion of Landsat-8 and MODIS Data to Derive Phenology, Biomass and Yield Estimates for Corn and Soybean","volume":"650","author":"Liao","year":"2019","journal-title":"Sci. Total Environ."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"140301","DOI":"10.1007\/s11432-019-2785-y","article-title":"Spatio-Temporal Fusion for Remote Sensing Data: An Overview and New Benchmark","volume":"63","author":"Li","year":"2020","journal-title":"Sci. China Inf. Sci."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Zhu, X., Cai, F., Tian, J., and Williams, T.K.-A. (2018). 
Spatiotemporal Fusion of Multisource Remote Sensing Data: Literature Survey, Taxonomy, Principles, Applications, and Future Directions. Remote Sens., 10.","DOI":"10.3390\/rs10040527"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"2207","DOI":"10.1109\/TGRS.2006.872081","article-title":"On the Blending of the Landsat and MODIS Surface Reflectance: Predicting Daily Landsat Surface Reflectance","volume":"44","author":"Gao","year":"2006","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"1613","DOI":"10.1016\/j.rse.2009.03.007","article-title":"A New Data Fusion Model for High Spatial- and Temporal-Resolution Mapping of Forest Disturbance Based on Landsat and MODIS","volume":"113","author":"Hilker","year":"2009","journal-title":"Remote Sens. Environ."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"2610","DOI":"10.1016\/j.rse.2010.05.032","article-title":"An Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model for Complex Heterogeneous Regions","volume":"114","author":"Zhu","year":"2010","journal-title":"Remote Sens. Environ."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"52","DOI":"10.1016\/j.rse.2013.03.021","article-title":"Blending Multi-Resolution Satellite Sea Surface Temperature (SST) Products Using Bayesian Maximum Entropy Method","volume":"135","author":"Li","year":"2013","journal-title":"Remote Sens. Environ."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"561","DOI":"10.1080\/2150704X.2013.769283","article-title":"Unified Fusion of Remote-Sensing Imagery: Generating Simultaneously High-Resolution Synthetic Spatial\u2013Temporal\u2013Spectral Earth Observations","volume":"4","author":"Huang","year":"2013","journal-title":"Remote Sens. 
Lett."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"1212","DOI":"10.1109\/36.763276","article-title":"Unmixing-Based Multisensor Multiresolution Image Fusion","volume":"37","author":"Zhukov","year":"1999","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"063507","DOI":"10.1117\/1.JRS.6.063507","article-title":"Use of MODIS and Landsat Time Series Data to Generate High-Resolution Temporal Synthetic Landsat Data Using a Spatial and Temporal Reflectance Fusion Model","volume":"6","author":"Wu","year":"2012","journal-title":"J. Appl. Remote Sens."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"5346","DOI":"10.3390\/rs5105346","article-title":"An Enhanced Spatial and Temporal Data Fusion Model for Fusing Landsat and MODIS Surface Reflectance to Generate High Temporal Landsat-Like Data","volume":"5","author":"Zhang","year":"2013","journal-title":"Remote Sens."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"31","DOI":"10.1016\/j.rse.2017.10.046","article-title":"Spatio-Temporal Fusion for Daily Sentinel-2 Images","volume":"204","author":"Wang","year":"2018","journal-title":"Remote Sens. Environ."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"165","DOI":"10.1016\/j.rse.2015.11.016","article-title":"A Flexible Spatiotemporal Method for Fusing Satellite Images with Different Resolutions","volume":"172","author":"Zhu","year":"2016","journal-title":"Remote Sens. Environ."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"74","DOI":"10.1016\/j.rse.2019.03.012","article-title":"An Improved Flexible Spatiotemporal DAta Fusion (IFSDAF) Method for Producing High Spatiotemporal Resolution Normalized Difference Vegetation Index Time Series","volume":"227","author":"Liu","year":"2019","journal-title":"Remote Sens. 
Environ."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"111537","DOI":"10.1016\/j.rse.2019.111537","article-title":"SFSDAF: An Enhanced FSDAF That Incorporates Sub-Pixel Class Fraction Change Information for Spatio-Temporal Image Fusion","volume":"237","author":"Li","year":"2020","journal-title":"Remote Sens. Environ."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"34","DOI":"10.1016\/j.rse.2014.09.012","article-title":"A Comparison of STARFM and an Unmixing-Based Algorithm for Landsat and MODIS Data Fusion","volume":"156","author":"Gevaert","year":"2015","journal-title":"Remote Sens. Environ."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"3707","DOI":"10.1109\/TGRS.2012.2186638","article-title":"Spatiotemporal Reflectance Fusion via Sparse Representation","volume":"50","author":"Huang","year":"2012","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"821","DOI":"10.1109\/JSTARS.2018.2797894","article-title":"Spatiotemporal Satellite Image Fusion Using Deep Convolutional Neural Networks","volume":"11","author":"Song","year":"2018","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"6552","DOI":"10.1109\/TGRS.2019.2907310","article-title":"StfNet: A Two-Stream Convolutional Neural Network for Spatiotemporal Image Fusion","volume":"57","author":"Liu","year":"2019","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Tan, Z., Yue, P., Di, L., and Tang, J. (2018). Deriving High Spatiotemporal Remote Sensing Images Using Deep Convolutional Network. Remote Sens., 10.","DOI":"10.3390\/rs10071066"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Tan, Z., Di, L., Zhang, M., Guo, L., and Gao, M. (2019). An Enhanced Deep Convolutional Model for Spatiotemporal Image Fusion. 
Remote Sens., 11.","DOI":"10.3390\/rs11242898"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"140302","DOI":"10.1007\/s11432-019-2805-y","article-title":"A New Sensor Bias-Driven Spatio-Temporal Fusion Model Based on Convolutional Neural Networks","volume":"63","author":"Li","year":"2020","journal-title":"Sci. China Inf. Sci."},{"key":"ref_27","unstructured":"Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014, January 8\u201313). Generative Adversarial Networks. Proceedings of the Advances in Neural Information Processing Systems 27 (NIPS 2014), Montreal, QC, Canada."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"4273","DOI":"10.1109\/TGRS.2020.3010530","article-title":"Remote Sensing Image Spatiotemporal Fusion Using a Generative Adversarial Network","volume":"59","author":"Zhang","year":"2021","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"5851","DOI":"10.1109\/TGRS.2020.3023432","article-title":"CycleGAN-STF: Spatiotemporal Fusion via CycleGAN-Based Image Generation","volume":"59","author":"Chen","year":"2021","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"5601413","DOI":"10.1109\/TGRS.2021.3050551","article-title":"A Flexible Reference-Insensitive Spatiotemporal Fusion Model for Remote Sensing Images Using Conditional Generative Adversarial Network","volume":"60","author":"Tan","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"4410816","DOI":"10.1109\/TGRS.2022.3169916","article-title":"MLFF-GAN: A Multilevel Feature Fusion with GAN for Spatiotemporal Remote Sensing Images","volume":"60","author":"Song","year":"2022","journal-title":"IEEE Trans. Geosci. 
Remote Sens."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"5407217","DOI":"10.1109\/TGRS.2022.3145086","article-title":"A Robust Model for MODIS and Landsat Image Fusion Considering Input Noise","volume":"60","author":"Tan","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Cao, H., Luo, X., Peng, Y., and Xie, T. (2022). MANet: A Network Architecture for Remote Sensing Spatiotemporal Fusion Based on Multiscale and Attention Mechanisms. Remote Sens., 14.","DOI":"10.3390\/rs14184600"},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"5400915","DOI":"10.1109\/TGRS.2021.3065418","article-title":"Spatiotemporal Reflectance Fusion Using a Generative Adversarial Network","volume":"60","author":"Shang","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"5528117","DOI":"10.1109\/TGRS.2022.3171331","article-title":"Remote Sensing Image Spatiotemporal Fusion via a Generative Adversarial Network With One Prior Image Pair","volume":"60","author":"Song","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"2005716","DOI":"10.1109\/TGRS.2022.3177749","article-title":"HCNNet: A Hybrid Convolutional Neural Network for Spatiotemporal Image Fusion","volume":"60","author":"Zhu","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep Residual Learning for Image Recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Liu, Z., Wang, L., Wu, W., Qian, C., and Lu, T. (2021, January 10\u201317). TAM: Temporal Adaptive Module for Video Recognition. 
Proceedings of the 2021 IEEE\/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.01345"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Dai, J., Qi, H., Xiong, Y., Li, Y., Zhang, G., Hu, H., and Wei, Y. (2017, January 22\u201329). Deformable Convolutional Networks. Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.89"},{"key":"ref_40","unstructured":"Mirza, M., and Osindero, S. (2014). Conditional Generative Adversarial Nets. arXiv."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A.A. (2017, January 21\u201326). Image-to-Image Translation with Conditional Adversarial Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.632"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015, January 5\u20139). U-Net: Convolutional Networks for Biomedical Image Segmentation. Proceedings of the MICCAI 2015: 18th International Conference, Munich, Germany.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Kong, X., Liu, X., Gu, J., Qiao, Y., and Dong, C. (2022, January 18\u201324). Reflash Dropout in Image Super-Resolution. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00591"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Zhu, X., Hu, H., Lin, S., and Dai, J. (2019, January 15\u201320). Deformable ConvNets v2: More Deformable, Better Results. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00953"},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"193","DOI":"10.1016\/j.rse.2013.02.007","article-title":"Assessing the Accuracy of Blending Landsat\u2013MODIS Surface Reflectances in Two Landscapes with Contrasting Spatial and Temporal Dynamics: A Framework for Algorithm Selection","volume":"133","author":"Emelyanova","year":"2013","journal-title":"Remote Sens. Environ."},{"key":"ref_46","unstructured":"Wald, L. (2002). Data Fusion. Definitions and Architectures\u2014Fusion of Images of Different Spatial Resolutions, Presses de l\u2019Ecole, Ecole des Mines de Paris."},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"600","DOI":"10.1109\/TIP.2003.819861","article-title":"Image Quality Assessment: From Error Visibility to Structural Similarity","volume":"13","author":"Wang","year":"2004","journal-title":"IEEE Trans. Image Process."}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/21\/5128\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T21:12:30Z","timestamp":1760130750000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/21\/5128"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,10,26]]},"references-count":47,"journal-issue":{"issue":"21","published-online":{"date-parts":[[2023,11]]}},"alternative-id":["rs15215128"],"URL":"https:\/\/doi.org\/10.3390\/rs15215128","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,10,26]]}}}