{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,28]],"date-time":"2026-03-28T13:20:09Z","timestamp":1774704009788,"version":"3.50.1"},"reference-count":36,"publisher":"MDPI AG","issue":"21","license":[{"start":{"date-parts":[[2022,11,1]],"date-time":"2022-11-01T00:00:00Z","timestamp":1667260800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Key R&amp;D Program of China","award":["2019YFA0606702"],"award-info":[{"award-number":["2019YFA0606702"]}]},{"name":"National Key R&amp;D Program of China","award":["91858202"],"award-info":[{"award-number":["91858202"]}]},{"name":"National Key R&amp;D Program of China","award":["41630963"],"award-info":[{"award-number":["41630963"]}]},{"name":"National Key R&amp;D Program of China","award":["41776003"],"award-info":[{"award-number":["41776003"]}]},{"name":"National Key R&amp;D Program of China","award":["202102245034"],"award-info":[{"award-number":["202102245034"]}]},{"name":"National Key R&amp;D Program of China","award":["IIS-2123264"],"award-info":[{"award-number":["IIS-2123264"]}]},{"name":"National Key R&amp;D Program of China","award":["80NSSC20M0220"],"award-info":[{"award-number":["80NSSC20M0220"]}]},{"name":"National Natural Science Foundation of China","award":["2019YFA0606702"],"award-info":[{"award-number":["2019YFA0606702"]}]},{"name":"National Natural Science Foundation of China","award":["91858202"],"award-info":[{"award-number":["91858202"]}]},{"name":"National Natural Science Foundation of China","award":["41630963"],"award-info":[{"award-number":["41630963"]}]},{"name":"National Natural Science Foundation of China","award":["41776003"],"award-info":[{"award-number":["41776003"]}]},{"name":"National Natural Science Foundation of China","award":["202102245034"],"award-info":[{"award-number":["202102245034"]}]},{"name":"National Natural Science Foundation of 
China","award":["IIS-2123264"],"award-info":[{"award-number":["IIS-2123264"]}]},{"name":"National Natural Science Foundation of China","award":["80NSSC20M0220"],"award-info":[{"award-number":["80NSSC20M0220"]}]},{"name":"Industry\u2013University Cooperation and Collaborative Education Projects","award":["2019YFA0606702"],"award-info":[{"award-number":["2019YFA0606702"]}]},{"name":"Industry\u2013University Cooperation and Collaborative Education Projects","award":["91858202"],"award-info":[{"award-number":["91858202"]}]},{"name":"Industry\u2013University Cooperation and Collaborative Education Projects","award":["41630963"],"award-info":[{"award-number":["41630963"]}]},{"name":"Industry\u2013University Cooperation and Collaborative Education Projects","award":["41776003"],"award-info":[{"award-number":["41776003"]}]},{"name":"Industry\u2013University Cooperation and Collaborative Education Projects","award":["202102245034"],"award-info":[{"award-number":["202102245034"]}]},{"name":"Industry\u2013University Cooperation and Collaborative Education Projects","award":["IIS-2123264"],"award-info":[{"award-number":["IIS-2123264"]}]},{"name":"Industry\u2013University Cooperation and Collaborative Education 
Projects","award":["80NSSC20M0220"],"award-info":[{"award-number":["80NSSC20M0220"]}]},{"name":"NSF","award":["2019YFA0606702"],"award-info":[{"award-number":["2019YFA0606702"]}]},{"name":"NSF","award":["91858202"],"award-info":[{"award-number":["91858202"]}]},{"name":"NSF","award":["41630963"],"award-info":[{"award-number":["41630963"]}]},{"name":"NSF","award":["41776003"],"award-info":[{"award-number":["41776003"]}]},{"name":"NSF","award":["202102245034"],"award-info":[{"award-number":["202102245034"]}]},{"name":"NSF","award":["IIS-2123264"],"award-info":[{"award-number":["IIS-2123264"]}]},{"name":"NSF","award":["80NSSC20M0220"],"award-info":[{"award-number":["80NSSC20M0220"]}]},{"name":"NASA","award":["2019YFA0606702"],"award-info":[{"award-number":["2019YFA0606702"]}]},{"name":"NASA","award":["91858202"],"award-info":[{"award-number":["91858202"]}]},{"name":"NASA","award":["41630963"],"award-info":[{"award-number":["41630963"]}]},{"name":"NASA","award":["41776003"],"award-info":[{"award-number":["41776003"]}]},{"name":"NASA","award":["202102245034"],"award-info":[{"award-number":["202102245034"]}]},{"name":"NASA","award":["IIS-2123264"],"award-info":[{"award-number":["IIS-2123264"]}]},{"name":"NASA","award":["80NSSC20M0220"],"award-info":[{"award-number":["80NSSC20M0220"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Floods are among the most frequent and common natural disasters, causing numerous casualties and extensive property losses worldwide every year. Since flooding areas are often accompanied by cloudy and rainy weather, synthetic aperture radar (SAR) is one of the most powerful sensors for flood monitoring with capabilities of day-and-night and all-weather imaging. However, SAR images are prone to high speckle noise, shadows, and distortions, which affect the accuracy of water body segmentation. 
To address this issue, we propose a novel Modified DeepLabv3+ model based on the powerful extraction ability of convolutional neural networks for flood mapping from HISEA-1 SAR remote sensing images. Specifically, a lightweight encoder MobileNetv2 is used to improve floodwater detection efficiency, small jagged arrangement atrous convolutions are employed to capture features at small scales and improve pixel utilization, and more upsampling layers are utilized to refine the segmented boundaries of water bodies. The Modified DeepLabv3+ model is then used to analyze two severe flooding events in China and the United States. Results show that Modified DeepLabv3+ outperforms competing semantic segmentation models (SegNet, U-Net, and DeepLabv3+) with respect to the accuracy and efficiency of floodwater extraction. The modified model training resulted in average accuracy, F1, and mIoU scores of 95.74%, 89.31%, and 87.79%, respectively. Further analysis also revealed that Modified DeepLabv3+ is able to accurately distinguish water feature shape and boundary, despite complicated background conditions, while also retaining the highest efficiency by covering 1140 km2 in 5 min. 
These results demonstrate that this model is a valuable tool for flood monitoring and emergency management.<\/jats:p>","DOI":"10.3390\/rs14215504","type":"journal-article","created":{"date-parts":[[2022,11,2]],"date-time":"2022-11-02T08:15:12Z","timestamp":1667376912000},"page":"5504","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":42,"title":["High-Performance Segmentation for Flood Mapping of HISEA-1 SAR Remote Sensing Images"],"prefix":"10.3390","volume":"14","author":[{"given":"Suna","family":"Lv","sequence":"first","affiliation":[{"name":"State Key Laboratory of Marine Environmental Science, College of Ocean and Earth Sciences, Xiamen University, Xiamen 361102, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5395-1374","authenticated-orcid":false,"given":"Lingsheng","family":"Meng","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Marine Environmental Science, College of Ocean and Earth Sciences, Xiamen University, Xiamen 361102, China"},{"name":"College of Earth, Ocean & Environment, University of Delaware, Newark, DE 19716, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7878-9852","authenticated-orcid":false,"given":"Deanna","family":"Edwing","sequence":"additional","affiliation":[{"name":"College of Earth, Ocean & Environment, University of Delaware, Newark, DE 19716, USA"}]},{"given":"Sihan","family":"Xue","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Marine Environmental Science, College of Ocean and Earth Sciences, Xiamen University, Xiamen 361102, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4935-2838","authenticated-orcid":false,"given":"Xupu","family":"Geng","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Marine Environmental Science, College of Ocean and Earth Sciences, Xiamen University, Xiamen 361102, China"},{"name":"Engineering Research Center of Ocean Remote Sensing Big Data, Fujian Province 
University, Xiamen 361102, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6578-6970","authenticated-orcid":false,"given":"Xiao-Hai","family":"Yan","sequence":"additional","affiliation":[{"name":"College of Earth, Ocean & Environment, University of Delaware, Newark, DE 19716, USA"},{"name":"Joint Center for Remote Sensing, University of Delaware-Xiamen University, Xiamen 361002, China"}]}],"member":"1968","published-online":{"date-parts":[[2022,11,1]]},"reference":[{"key":"ref_1","first-page":"154","article-title":"Floods losses and hazards in China from 2001 to 2020","volume":"18","author":"Li","year":"2022","journal-title":"Clim. Chang. Res."},{"key":"ref_2","first-page":"416","article-title":"Research progress in forecasting methods of rainstorm and flood disaster in China","volume":"5","author":"Xia","year":"2019","journal-title":"Torrential Rain Disasters"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"713","DOI":"10.1016\/S0031-3203(01)00070-X","article-title":"Segmentation of SAR images","volume":"35","author":"Zaart","year":"2002","journal-title":"Pattern Recognit."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"53","DOI":"10.1016\/j.isprsjprs.2019.10.017","article-title":"A local thresholding approach to flood water delineation using Sentinel-1 SAR imagery","volume":"159","author":"Liang","year":"2020","journal-title":"ISPRS J. Photogramm. Remote Sens."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"6975","DOI":"10.1109\/TGRS.2017.2737664","article-title":"A Hierarchical Split-Based Approach for Parametric Thresholding of SAR Images: Flood Inundation as a Test Case","volume":"55","author":"Chini","year":"2017","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Lang, F., Yang, J., Yan, S., and Qin, F. (2018). Superpixel Segmentation of Polarimetric Synthetic Aperture Radar (SAR) Images Based on Generalized Mean Shift. 
Remote Sens., 10.","DOI":"10.3390\/rs10101592"},{"key":"ref_7","first-page":"1","article-title":"Fast Multiscale Superpixel Segmentation for SAR Imagery","volume":"19","author":"Zhang","year":"2022","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Ijitona, B., Ren, J., and Hwang, B. (2014). SAR Sea Ice Image Segmentation Using Watershed with Intensity-Based Region Merging. IEEE Int. Conf. Comput. Inf. Technol., 168\u2013172.","DOI":"10.1109\/CIT.2014.19"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"196","DOI":"10.1016\/j.eswa.2017.04.018","article-title":"River channel segmentation in polarimetric SAR images: Watershed transform combined with average contrast maximisation","volume":"82","author":"Ciecholewski","year":"2017","journal-title":"Expert Syst. Appl."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"2489","DOI":"10.1080\/01431160116902","article-title":"Flood boundary delineation from Synthetic Aperture Radar imagery using a statistical active contour model","volume":"22","author":"Horritt","year":"2001","journal-title":"Int. J. Remote Sens."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"4565","DOI":"10.1109\/JSTARS.2017.2716620","article-title":"Level Set Segmentation Algorithm for High-Resolution Polarimetric SAR Images Based on a Heterogeneous Clutter Model","volume":"10","author":"Jin","year":"2017","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"1171","DOI":"10.1109\/LGRS.2017.2702062","article-title":"A Median Regularized Level Set for Hierarchical Segmentation of SAR Images","volume":"14","author":"Braga","year":"2017","journal-title":"IEEE Geosci. Remote Sens. 
Lett."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"529","DOI":"10.5194\/nhess-11-529-2011","article-title":"An algorithm for operational flood mapping from Synthetic Aperture Radar (SAR) data using fuzzy logic","volume":"2","author":"Pulvirenti","year":"2011","journal-title":"Nat. Hazards Earth Syst. Sci."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"5122","DOI":"10.3390\/rs5105122","article-title":"Varying Scale and Capability of Envisat ASAR-WSM, TerraSAR-X Scansar and TerraSAR-X Stripmap Data to Assess Urban Flood Situations: A Case Study of the Mekong Delta in Can Tho Province","volume":"5","author":"Kuenzer","year":"2013","journal-title":"Remote Sens."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"1432","DOI":"10.1109\/TGRS.2007.893568","article-title":"A New Statistical Similarity Measure for Change Detection in Multitemporal SAR Images and Its Extension to Multiscale Change Analysis","volume":"45","author":"Inglada","year":"2007","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"206","DOI":"10.1088\/1748-9326\/9\/3\/035002","article-title":"Flood extent mapping for Namibia using change detection and thresholding with SAR","volume":"9","author":"Long","year":"2014","journal-title":"Environ. Res. Lett."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"152","DOI":"10.1111\/jfr3.12303","article-title":"Multi-temporal synthetic aperture radar flood mapping using change detection","volume":"11","author":"Clement","year":"2018","journal-title":"J. Flood Risk Manag."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015, January 5\u20139). U-Net: Convolutional Networks for Biomedical Image Segmentation. 
Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"2481","DOI":"10.1109\/TPAMI.2016.2644615","article-title":"SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation","volume":"39","author":"Badrinarayanan","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_20","unstructured":"Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., and Yuille, A.L. (2015, January 7\u20139). Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs. Proceedings of the International Conference on Learning Representations 2015, San Diego, CA, USA."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"834","DOI":"10.1109\/TPAMI.2017.2699184","article-title":"DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs","volume":"40","author":"Chen","year":"2018","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_22","unstructured":"Chen, L.C., Papandreou, G., Schroff, F., and Adam, H. (2017). Rethinking Atrous Convolution for Semantic Image Segmentation. Preprint arXiv."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Chen, L.C., Zhu, Y., Papandreou, G., Schroff, F., and Adam, H. (2018). Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation, Springer.","DOI":"10.1007\/978-3-030-01234-2_49"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Kang, W., Xiang, Y., Wang, F., Wan, L., and You, H. (2018). Flood Detection in Gaofen-3 SAR Images via Fully Convolutional Networks. Sensors, 18.","DOI":"10.3390\/s18092915"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Nemni, E., Bullock, J., Belabbes, S., and Bromley, L. (2020). 
Fully Convolutional Neural Network for Rapid Flood Segmentation in Synthetic Aperture Radar Imagery. Remote Sens., 12.","DOI":"10.3390\/rs12162532"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Bai, Y., Wu, W., Yang, Z., Yu, J., Zhao, B., Liu, X., Yang, H., Mas, E., and Koshimura, S. (2021). Enhancement of Detecting Permanent Water and Temporary Water in Flood Disasters by Fusing Sentinel-1 and Sentinel-2 Imagery Using Deep Learning Algorithms: Demonstration of Sen1Floods11 Benchmark Datasets. Remote Sens., 13.","DOI":"10.3390\/rs13112220"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Xue, S., Geng, X., Meng, L., Xie, T., Huang, L., and Yan, X.-H. (2021). HISEA-1: The First C-Band SAR Miniaturized Satellite for Ocean and Coastal Observation. Remote Sens., 13.","DOI":"10.3390\/rs13112076"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"1467","DOI":"10.1109\/JPROC.2010.2050290","article-title":"LabelMe: Online Image Annotation and Applications","volume":"98","author":"Torralba","year":"2010","journal-title":"Proc. IEEE"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Chollet, F. (2017, January 21\u201326). Xception: Deep Learning with Depthwise Separable Convolutions. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.195"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015, January 7\u201312). Going Deeper with Convolutions. Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. (2016, January 27\u201330). Rethinking the Inception Architecture for Computer Vision. 
Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.308"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep Residual Learning for Image Recognition. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"1904","DOI":"10.1109\/TPAMI.2015.2389824","article-title":"Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition","volume":"37","author":"He","year":"2015","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Wang, P.Q., Chen, P.F., Yuan, Y., Liu, D., Huang, Z.H., Hou, X.D., and Cottrell, G. (2018, January 12\u201315). Understanding Convolution for Semantic Segmentation. Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV 2018), Lake Tahoe, NV, USA.","DOI":"10.1109\/WACV.2018.00163"},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"413","DOI":"10.1007\/s11069-009-9476-y","article-title":"Damage to residential buildings due to flooding of New Orleans after hurricane Katrina","volume":"54","author":"Pistrika","year":"2010","journal-title":"Nat. Hazards"},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"11375","DOI":"10.1002\/2016GL071190","article-title":"Unraveling El Nino\u2019s impact on the East Asian Monsoon and Yangtze River summer flooding","volume":"43","author":"Zhang","year":"2016","journal-title":"Geophys. Res. 
Lett."}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/14\/21\/5504\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:08:50Z","timestamp":1760144930000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/14\/21\/5504"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,11,1]]},"references-count":36,"journal-issue":{"issue":"21","published-online":{"date-parts":[[2022,11]]}},"alternative-id":["rs14215504"],"URL":"https:\/\/doi.org\/10.3390\/rs14215504","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,11,1]]}}}