{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,19]],"date-time":"2026-02-19T02:13:43Z","timestamp":1771467223154,"version":"3.50.1"},"reference-count":45,"publisher":"MDPI AG","issue":"14","license":[{"start":{"date-parts":[[2023,7,22]],"date-time":"2023-07-22T00:00:00Z","timestamp":1689984000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Ministry of Science and Technology of China, National Key Research and Development Plan","award":["2022YFD1901400"],"award-info":[{"award-number":["2022YFD1901400"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Semantic segmentation of Polarimetric SAR (PolSAR) images is an important research topic in remote sensing. Many deep neural network-based semantic segmentation methods have been applied to PolSAR image segmentation tasks. However, a lack of effective means to deal with the similarity of object features and speckle noise in PolSAR images exists. Thisstudy aims to improve the discriminative capability of neural networks for various intensities of backscattering coefficients while reducing the effects of noise in PolSAR semantic segmentation tasks. Firstly, we propose pre-processing methods for PolSAR image data, which consist of the fusion of multi-source data and false color mapping. Then, we propose a Multi-axis Sequence Attention Segmentation Network (MASA-SegNet) for semantic segmentation of PolSAR data, which is an encoder\u2013decoder framework. Specifically, within the encoder, a feature extractor is designed and implemented by stacking Multi-axis Sequence Attention blocks to efficiently extract PolSAR features at multiple scales while mitigating inter-class similarities and intra-class differences from speckle noise. 
Moreover, the serialized residual connection design enables the propagation of spatial information throughout the network, thereby improving the overall spatial awareness of MASA-SegNet. The decoder is then used to accomplish the semantic segmentation task. The superiority of this algorithm for semantic segmentation is explored through feature visualization. The experiments show that our proposed spatial sequence attention mechanism can effectively extract features and reduce noise interference, and it is thus able to obtain the best results on two large-scale public datasets (the AIR-PolSAR-Seg and FUSAR-Map datasets).<\/jats:p>","DOI":"10.3390\/rs15143662","type":"journal-article","created":{"date-parts":[[2023,7,24]],"date-time":"2023-07-24T01:12:28Z","timestamp":1690161148000},"page":"3662","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":16,"title":["MASA-SegNet: A Semantic Segmentation Network for PolSAR Images"],"prefix":"10.3390","volume":"15","author":[{"given":"Jun","family":"Sun","sequence":"first","affiliation":[{"name":"College of Resources, Sichuan Agricultural University, Chengdu 611130, China"}]},{"given":"Shiqi","family":"Yang","sequence":"additional","affiliation":[{"name":"College of Information Engineering, Sichuan Agricultural University, Ya\u2019an 625000, China"}]},{"given":"Xuesong","family":"Gao","sequence":"additional","affiliation":[{"name":"College of Resources, Sichuan Agricultural University, Chengdu 611130, China"},{"name":"Key Laboratory of Investigation and Monitoring, Protection and Utilization for Cultivated Land Resources, Ministry of Natural Resources, Chengdu 611130, China"}]},{"given":"Dinghua","family":"Ou","sequence":"additional","affiliation":[{"name":"College of Resources, Sichuan Agricultural University, Chengdu 611130, China"},{"name":"Key Laboratory of Investigation and Monitoring, Protection and Utilization for Cultivated Land Resources, 
Ministry of Natural Resources, Chengdu 611130, China"}]},{"given":"Zhaonan","family":"Tian","sequence":"additional","affiliation":[{"name":"College of Resources, Sichuan Agricultural University, Chengdu 611130, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6372-3618","authenticated-orcid":false,"given":"Jing","family":"Wu","sequence":"additional","affiliation":[{"name":"College of Information Engineering, Sichuan Agricultural University, Ya\u2019an 625000, China"}]},{"given":"Mantao","family":"Wang","sequence":"additional","affiliation":[{"name":"College of Information Engineering, Sichuan Agricultural University, Ya\u2019an 625000, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,7,22]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"6","DOI":"10.1109\/MGRS.2013.2248301","article-title":"A tutorial on synthetic aperture radar","volume":"1","author":"Moreira","year":"2013","journal-title":"IEEE Geosci. Remote Sens. Mag."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"313","DOI":"10.1080\/02757259409532206","article-title":"Speckle filtering of synthetic aperture radar images: A review","volume":"8","author":"Lee","year":"1994","journal-title":"Remote Sens. Rev."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"793","DOI":"10.1109\/TPAMI.2005.106","article-title":"Multiregion level-set partitioning of synthetic aperture radar images","volume":"27","author":"Ayed","year":"2005","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"6781","DOI":"10.1080\/01431161.2014.965282","article-title":"Analysis of l-band sar backscatter and coherence for delineation of land-use\/land-cover","volume":"35","author":"Parihar","year":"2014","journal-title":"Int. J. 
Remote Sens."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"303","DOI":"10.2528\/PIERB11071106","article-title":"Assessment of l-band sar data at different polarization combinations for crop and other landuse classification","volume":"36","author":"Haldar","year":"2012","journal-title":"Prog. Electromagn. Res. B"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"97","DOI":"10.1016\/j.neucom.2013.01.033","article-title":"Decision fusion of sparse representation and support vector machine for sar image target recognition","volume":"113","author":"Liu","year":"2013","journal-title":"Neurocomputing"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"118","DOI":"10.1016\/j.rse.2014.04.010","article-title":"Random forest classification of salt marsh vegetation habitats using quad-polarimetric airborne sar, elevation and optical rs data","volume":"149","author":"Beijma","year":"2014","journal-title":"Remote Sens. Environ."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"2560","DOI":"10.1109\/TIP.2018.2806201","article-title":"A multi-region segmentation method for sar images based on the multi-texture model with level sets","volume":"27","author":"Luo","year":"2018","journal-title":"IEEE Trans. Image Process."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"6601","DOI":"10.1109\/TIP.2020.2992177","article-title":"Polarimetric sar image semantic segmentation with 3d discrete wavelet transform and markov random field","volume":"29","author":"Bi","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Bianchi, F.M., Espeseth, M.M., and Borch, N. (2020). Large-scale detection and categorization of oil spills from sar images with deep learning. Remote Sens., 12.","DOI":"10.3390\/rs12142260"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Jaturapitpornchai, R., Matsuoka, M., Kanemoto, N., Kuzuoka, S., Ito, R., and Nakamura, R. (2019). 
Newly built construction detection in sar images using deep learning. Remote Sens., 11.","DOI":"10.3390\/rs11121444"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Li, J., Xu, C., Su, H., Gao, L., and Wang, T. (2022). Deep learning for sar ship detection: Past, present and future. Remote Sens., 14.","DOI":"10.3390\/rs14112712"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Cao, H., Zhang, H., Wang, C., and Zhang, B. (2019). Operational flood detection using sentinel-1 sar data over large areas. Water, 11.","DOI":"10.3390\/w11040786"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"223","DOI":"10.1016\/j.isprsjprs.2019.03.015","article-title":"A new fully convolutional neural network for semantic segmentation of polarimetric sar imagery in complex land cover ecosystem","volume":"151","author":"Mohammadimanesh","year":"2019","journal-title":"ISPRS J. Photogramm. Remote Sens."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Orfanidis, G., Ioannidis, K., Avgerinakis, K., Vrochidis, S., and Kompatsiaris, I. (2018, January 7\u201310). A deep neural network for oil spill semantic segmentation in sar images. Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece.","DOI":"10.1109\/ICIP.2018.8451113"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"834","DOI":"10.1109\/TPAMI.2017.2699184","article-title":"Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs","volume":"40","author":"Chen","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"609","DOI":"10.1109\/JSTARS.2020.2968966","article-title":"Object-based classification of polsar images based on spatial and semantic features","volume":"13","author":"Zou","year":"2020","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote. 
Sens."},{"key":"ref_18","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., and Polosukhin, I. (2017). Attention is all you need. Adv. Neural Inf. Process. Syst., 30."},{"key":"ref_19","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2020). An image is worth 16 \u00d7 16 words: Transformers for image recognition at scale. arXiv."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021, January 12). Swin transformer: Hierarchical vision transformer using shifted windows. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Virtual Event.","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Sun, J., Zhang, J., Gao, X., Wang, M., Ou, D., Wu, X., and Zhang, D. (2022). Fusing spatial attention with spectral-channel attention mechanism for hyperspectral image classification via encoder\u2013decoder networks. Remote Sens., 14.","DOI":"10.3390\/rs14091968"},{"key":"ref_22","first-page":"5219715","article-title":"Exploring vision transformers for polarimetric sar image classification","volume":"60","author":"Dong","year":"2021","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"4004205","DOI":"10.1109\/LGRS.2023.3239263","article-title":"Local window attention transformer for polarimetric sar image classification","volume":"20","author":"Jamali","year":"2023","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_24","first-page":"4505405","article-title":"High resolution sar image classification using global-local network structure based on vision transformer and cnn","volume":"19","author":"Liu","year":"2022","journal-title":"IEEE Geosci. Remote Sens. 
Lett."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Xia, R., Chen, J., Huang, Z., Wan, H., Wu, B., Sun, L., Yao, B., Xiang, H., and Xing, M. (2022). Crtranssar: A visual transformer based on contextual joint representation learning for sar ship detection. Remote Sens., 14.","DOI":"10.3390\/rs14061488"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"16","DOI":"10.1016\/j.isprsjprs.2023.02.011","article-title":"A domain specific knowledge extraction transformer method for multisource satellite-borne sar images ship detection","volume":"198","author":"Zhao","year":"2023","journal-title":"ISPRS J. Photogramm. Remote Sens."},{"key":"ref_27","first-page":"9204","article-title":"Pay attention to mlps","volume":"34","author":"Liu","year":"2021","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Tu, Z., Talebi, H., Zhang, H., Yang, F., Milanfar, P., Bovik, A., and Li, Y. (2022, January 28\u201329). Maxim: Multi-axis mlp for image processing. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Istanbul, Turkey.","DOI":"10.1109\/CVPR52688.2022.00568"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"3830","DOI":"10.1109\/JSTARS.2022.3170326","article-title":"Air-polsar-seg: A large-scale data set for terrain segmentation in complex-scene polsar images","volume":"15","author":"Wang","year":"2022","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"3107","DOI":"10.1109\/JSTARS.2021.3063797","article-title":"Object-level semantic segmentation on the high-resolution gaofen-3 fusar-map dataset","volume":"14","author":"Shi","year":"2021","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Yommy, A.S., Liu, R., and Wu, S. (2015, January 26\u201327). Sar image despeckling using refined lee filter. 
Proceedings of the 2015 7th International Conference on Intelligent Human-Machine Systems and Cybernetics, Hangzhou, China.","DOI":"10.1109\/IHMSC.2015.236"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"650","DOI":"10.1117\/1.600657","article-title":"New false color mapping for image fusion","volume":"35","author":"Toet","year":"1996","journal-title":"Opt. Eng."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Chollet, F. (2017, January 21\u201326). Xception: Deep learning with depthwise separable convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.195"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S.R. (2018). Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv.","DOI":"10.18653\/v1\/W18-5446"},{"key":"ref_35","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (July, January 26). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA."},{"key":"ref_36","unstructured":"Chen, L., Zhu, Y., Papandreou, G., Schroff, F., and Adam, H. (2018). Computer Vision\u2013ECCV 2018: 15th European Conference, Munich, Germany, 8\u201314 September 2018, Proceedings, Part VII 15, Springer."},{"key":"ref_37","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015). Medical Image Computing and Computer-Assisted Intervention\u2013MICCAI 2015: 18th International Conference, Munich, Germany, 5\u20139 October 2015, Proceedings, Part III 18, Springer."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"2481","DOI":"10.1109\/TPAMI.2016.2644615","article-title":"Segnet: A deep convolutional encoder-decoder architecture for image segmentation","volume":"39","author":"Badrinarayanan","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. 
Intell."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Long, J., Shelhamer, E., and Darrell, T. (2015, January 7\u201312). Fully convolutional networks for semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298965"},{"key":"ref_40","unstructured":"Zhu, Z., Xu, M., Bai, S., Huang, T., and Bai, X. (November, January 27). Asymmetric non-local neural networks for semantic segmentation. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Korea."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Fu, J., Liu, J., Tian, H., Li, Y., Bao, Y., Fang, Z., and Lu, H. (2019, January 16\u201320). Dual attention network for scene segmentation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00326"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Zhao, H., Zhang, Y., Liu, S., Shi, J., Loy, C.C., Lin, D., and Jia, J. (2018, January 8\u201314). Psanet: Point-wise spatial attention network for scene parsing. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01240-3_17"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Zhang, H., Dana, K., Shi, J., Zhang, Z., Wang, X., Tyagi, T., and Agrawal, A. (2018, January 18\u201322). Context encoding for semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00747"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Wang, X., Girshick, R., Gupta, A., and He, K. (2018, January 18\u201322). Non-local neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00813"},{"key":"ref_45","unstructured":"Islam, M.A., Jia, S., and Bruce, N.D.B. 
(2020). How much position information do convolutional neural networks encode?. arXiv."}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/14\/3662\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T20:17:11Z","timestamp":1760127431000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/14\/3662"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,7,22]]},"references-count":45,"journal-issue":{"issue":"14","published-online":{"date-parts":[[2023,7]]}},"alternative-id":["rs15143662"],"URL":"https:\/\/doi.org\/10.3390\/rs15143662","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,7,22]]}}}