{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,14]],"date-time":"2026-01-14T19:14:34Z","timestamp":1768418074947,"version":"3.49.0"},"reference-count":35,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2022,2,11]],"date-time":"2022-02-11T00:00:00Z","timestamp":1644537600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Through the Synthetic Aperture Radar (SAR) embarked on the satellites Sentinel-1A and Sentinel-1B of the Copernicus program, a large quantity of observations is routinely acquired over the oceans. A wide range of features from both oceanic (e.g., biological slicks, icebergs, etc.) and meteorologic origin (e.g., rain cells, wind streaks, etc.) are distinguishable on these acquisitions. This paper studies the semantic segmentation of ten metoceanic processes either in the context of a large quantity of image-level groundtruths (i.e., weakly-supervised framework) or of scarce pixel-level groundtruths (i.e., fully-supervised framework). Our main result is that a fully-supervised model outperforms any tested weakly-supervised algorithm. Adding more segmentation examples in the training set would further increase the precision of the predictions. 
Trained on 20 \u00d7 20 km imagettes acquired from the WV acquisition mode of the Sentinel-1 mission, the model is shown to generalize, under some assumptions, to wide-swath SAR data, which further extends its application domain to coastal areas.<\/jats:p>","DOI":"10.3390\/rs14040851","type":"journal-article","created":{"date-parts":[[2022,2,14]],"date-time":"2022-02-14T03:46:00Z","timestamp":1644810360000},"page":"851","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":17,"title":["Semantic Segmentation of Metoceanic Processes Using SAR Observations and Deep Learning"],"prefix":"10.3390","volume":"14","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-4158-4933","authenticated-orcid":false,"given":"Aur\u00e9lien","family":"Colin","sequence":"first","affiliation":[{"name":"IMT Atlantique, Lab-STICC, UMR CNRS 6285, F-29238 Brest, France"},{"name":"Collecte Localisation Satellites, F-31520 Brest, France"}]},{"given":"Ronan","family":"Fablet","sequence":"additional","affiliation":[{"name":"IMT Atlantique, Lab-STICC, UMR CNRS 6285, F-29238 Brest, France"}]},{"given":"Pierre","family":"Tandeo","sequence":"additional","affiliation":[{"name":"IMT Atlantique, Lab-STICC, UMR CNRS 6285, F-29238 Brest, France"},{"name":"Collecte Localisation Satellites, F-31520 Brest, France"}]},{"given":"Romain","family":"Husson","sequence":"additional","affiliation":[{"name":"Collecte Localisation Satellites, F-31520 Brest, France"}]},{"given":"Charles","family":"Peureux","sequence":"additional","affiliation":[{"name":"Collecte Localisation Satellites, F-31520 Brest, France"}]},{"given":"Nicolas","family":"Long\u00e9p\u00e9","sequence":"additional","affiliation":[{"name":"\u03a6-Lab Explore Office, ESRIN, European Space Agency (ESA), F-00044 Frascati, 
Italy"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1250-4436","authenticated-orcid":false,"given":"Alexis","family":"Mouche","sequence":"additional","affiliation":[{"name":"Laboratoire d\u2019Oceanographie Physique et Spatiale, Ifremer, F-31520 Brest, France"}]}],"member":"1968","published-online":{"date-parts":[[2022,2,11]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1349","DOI":"10.1080\/01431161.2019.1667548","article-title":"Satellite data cloud detection using deep learning supported by hyperspectral data","volume":"41","author":"Sun","year":"2020","journal-title":"Int. J. Remote Sens."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"143","DOI":"10.1109\/MGRS.2020.3046356","article-title":"Deep Learning Meets SAR: Concepts, Models, Pitfalls, and Perspectives","volume":"9","author":"Zhu","year":"2021","journal-title":"IEEE Geosci. Remote Sens. Mag."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Scarpa, G., Gargiulo, M., Mazza, A., and Gaetano, R. (2018). A CNN-Based Fusion Method for Feature Extraction from Sentinel Data. Remote Sens., 10.","DOI":"10.3390\/rs10020236"},{"key":"ref_4","unstructured":"Jackson, C. (2004). Synthetic Aperture Radar Marine User\u2019s Manual."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1584","DOI":"10.1093\/nsr\/nwaa047","article-title":"Deep-learning-based information mining from ocean remote-sensing imagery","volume":"7","author":"Li","year":"2020","journal-title":"Natl. Sci. Rev."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"105716","DOI":"10.1016\/j.asoc.2019.105716","article-title":"Oil spill segmentation in SAR images using convolutional neural networks. A comparative analysis with clustering and logistic regression algorithms","volume":"84","author":"Cantorna","year":"2019","journal-title":"Appl. Soft Comput."},{"key":"ref_7","unstructured":"Dechesne, C., Lef\u00e8vre, S., Vadaine, R., Hajduch, G., and Fablet, R. (2019, January 19\u201321). 
Multi-task deep learning from Sentinel-1 SAR: Ship detection, classification and length estimation. Proceedings of the BiDS\u201919: Conference on Big Data from Space, Munich, Germany. HAL: hal-02285670."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"5831","DOI":"10.1109\/JSTARS.2021.3074068","article-title":"Prediction of Categorized Sea Ice Concentration From Sentinel-1 SAR Images Based on a Fully Convolutional Network","volume":"14","author":"Colin","year":"2021","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep Residual Learning for Image Recognition. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Lin, T., Maire, M., Belongie, S.J., Bourdev, L.D., Girshick, R.B., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., and Zitnick, C.L. (2014). Microsoft COCO: Common Objects in Context. arXiv.","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"ref_12","first-page":"1","article-title":"Classification of Sea Ice Types in Sentinel-1 SAR images","volume":"2019","author":"Park","year":"2019","journal-title":"Cryosphere Discuss."},{"key":"ref_13","unstructured":"Wang, C., Tandeo, P., Mouche, A., Stopa, J.E., Gressani, V., Longepe, N., Vandemark, D., Foster, R.C., and Chapron, B. (2021, December 11). Labeled SAR Imagery Dataset of Ten Geophysical Phenomena from Sentinel-1 Wave Mode (TenGeoP-SARwv). 
Available online: https:\/\/www.seanoe.org\/data\/00456\/56796\/."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"111457","DOI":"10.1016\/j.rse.2019.111457","article-title":"Classification of the global Sentinel-1 SAR vignettes for ocean surface process studies","volume":"234","author":"Wang","year":"2019","journal-title":"Remote Sens. Environ."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"112031","DOI":"10.1016\/j.rse.2020.112031","article-title":"An assessment of marine atmospheric boundary layer roll detection using Sentinel-1 SAR data","volume":"250","author":"Wang","year":"2020","journal-title":"Remote Sens. Environ."},{"key":"ref_16","unstructured":"He, X., Zemel, R.S., and Carreira-Perpinan, M.A. (July, January 27). Multiscale conditional random fields for image labeling. Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), Washington, DC, USA."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Xu, J., Schwing, A.G., and Urtasun, R. (July, January 27). Tell Me What You See and I Will Show You Where It Is. Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, USA.","DOI":"10.1109\/CVPR.2014.408"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Perazzi, F., Kr\u00e4henb\u00fchl, P., Pritch, Y., and Hornung, A. (2012, January 16\u201321). Saliency filters: Contrast based filtering for salient region detection. Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA.","DOI":"10.1109\/CVPR.2012.6247743"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"1309","DOI":"10.1109\/TCSVT.2014.2381471","article-title":"Background Prior-Based Salient Object Detection via Deep Reconstruction Residual","volume":"25","author":"Han","year":"2015","journal-title":"IEEE Trans. Circuits Syst. 
Video Technol."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Liu, N., and Han, J. (2016, January 27\u201330). DHSNet: Deep Hierarchical Saliency Network for Salient Object Detection. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.80"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"137","DOI":"10.1023\/B:VISI.0000013087.49260.fb","article-title":"Robust Real-Time Face Detection","volume":"57","author":"Viola","year":"2004","journal-title":"Int. J. Comput. Vis."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Fleet, D., Pajdla, T., Schiele, B., and Tuytelaars, T. (2014). Visualizing and Understanding Convolutional Networks. Computer Vision\u2014ECCV 2014, Springer. Lecture Notes in Computer Science.","DOI":"10.1007\/978-3-319-10602-1"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"105","DOI":"10.1002\/gdj3.73","article-title":"A labelled ocean SAR imagery dataset of ten geophysical phenomena from Sentinel-1 wave mode","volume":"6","author":"Wang","year":"2019","journal-title":"Geosci. Data J."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"112178","DOI":"10.1016\/j.rse.2020.112178","article-title":"Wind direction retrieval from Sentinel-1 SAR images using ResNet","volume":"253","author":"Zanchetta","year":"2020","journal-title":"Remote Sens. Environ."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"4806","DOI":"10.1109\/ACCESS.2019.2962617","article-title":"The Real-World-Weight Cross-Entropy Loss Function: Modeling the Costs of Mislabeling","volume":"8","author":"Ho","year":"2020","journal-title":"IEEE Access"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. (2016, January 27\u201330). Rethinking the Inception Architecture for Computer Vision. 
Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.308"},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"436","DOI":"10.1038\/nature14539","article-title":"Deep Learning","volume":"521","author":"LeCun","year":"2015","journal-title":"Nature"},{"key":"ref_28","unstructured":"Geirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wichmann, F.A., and Brendel, W. (2018). ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S.E., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015, January 7\u201312). Going Deeper with Convolutions. Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"ref_30","unstructured":"Krizhevsky, A., Sutskever, I., and Hinton, G.E. (2012, January 3\u20138). ImageNet Classification with Deep Convolutional Neural Networks. Proceedings of the Neural Information Processing Systems, Lake Tahoe, NV, USA."},{"key":"ref_31","unstructured":"Simonyan, K., and Zisserman, A. (2014). Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Zhou, B., Khosla, A., Lapedriza, \u00c0., Oliva, A., and Torralba, A. (2016, January 27\u201330). Learning Deep Features for Discriminative Localization. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.319"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Wang, S., Chen, W., Xie, S.M., Azzari, G., and Lobell, D.B. (2020). Weakly Supervised Deep Learning for Segmentation of Remote Sensing Imagery. 
Remote Sens., 12.","DOI":"10.3390\/rs12020207"},{"key":"ref_34","unstructured":"Rolnick, D., Veit, A., Belongie, S., and Shavit, N. (2017). Deep Learning is Robust to Massive Label Noise. arXiv."},{"key":"ref_35","unstructured":"Hestness, J., Narang, S., Ardalani, N., Diamos, G., Jun, H., Kianinejad, H., Patwary, M.M.A., Yang, Y., and Zhou, Y. (2017). Deep Learning Scaling is Predictable, Empirically. arXiv."}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/14\/4\/851\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T22:17:25Z","timestamp":1760134645000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/14\/4\/851"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,2,11]]},"references-count":35,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2022,2]]}},"alternative-id":["rs14040851"],"URL":"https:\/\/doi.org\/10.3390\/rs14040851","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,2,11]]}}}