{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,24]],"date-time":"2025-10-24T08:32:29Z","timestamp":1761294749862,"version":"build-2065373602"},"reference-count":32,"publisher":"MDPI AG","issue":"13","license":[{"start":{"date-parts":[[2024,6,26]],"date-time":"2024-06-26T00:00:00Z","timestamp":1719360000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>SAR (synthetic aperture radar) ship detection is a popular research topic because of its wide range of applications. However, the limited volume of available SAR imagery restricts the generalization ability of detectors, making it difficult for them to adapt to new scenes. Although many data augmentation methods, such as clipping, pasting, and mixing, have been applied, they improve accuracy only marginally. To address this problem, this paper uses adversarial training for data generation. Perturbations are added to SAR images to generate new training samples, enabling the detector to learn richer features and improving its robustness. By separating the batch normalization of clean samples from that of perturbed images, performance degradation on clean samples is avoided. By perturbing the classification and localization losses simultaneously and selecting the larger one, the detector remains adaptable to a wider range of adversarial samples. Optimization efficiency and results are further improved through K-step averaged perturbation and one-step gradient descent. Experiments on different detectors show that the proposed method achieves AP (average precision) improvements of 8%, 10%, and 17% on SSDD, the SAR-Ship-Dataset, and AIR-SARShip, respectively, compared with traditional data augmentation methods.<\/jats:p>","DOI":"10.3390\/s24134154","type":"journal-article","created":{"date-parts":[[2024,6,26]],"date-time":"2024-06-26T09:29:33Z","timestamp":1719394173000},"page":"4154","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["A SAR Ship Detection Method Based on Adversarial Training"],"prefix":"10.3390","volume":"24","author":[{"given":"Jianwei","family":"Li","sequence":"first","affiliation":[{"name":"Naval Submarine Academy, Qingdao 264001, China"}]},{"given":"Zhentao","family":"Yu","sequence":"additional","affiliation":[{"name":"Naval Submarine Academy, Qingdao 264001, China"}]},{"given":"Jie","family":"Chen","sequence":"additional","affiliation":[{"name":"Naval Submarine Academy, Qingdao 264001, China"}]},{"given":"Hao","family":"Jiang","sequence":"additional","affiliation":[{"name":"Naval Submarine Academy, Qingdao 264001, China"}]}],"member":"1968","published-online":{"date-parts":[[2024,6,26]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"2711","DOI":"10.1109\/TGRS.2010.2041239","article-title":"An efficient and flexible statistical model based on generalized Gamma distribution for amplitude SAR images","volume":"48","author":"Li","year":"2010","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"2686","DOI":"10.1109\/TIP.2006.877362","article-title":"SAR image filtering based on the heavy-tailed Rayleigh model","volume":"15","author":"Achim","year":"2006","journal-title":"IEEE Trans. Image Process."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Li, J., Qu, C., and Shao, J. (2017, January 13\u201314). 
Ship detection in SAR images based on an improved faster R-CNN. Proceedings of the SAR in Big Data Era: Models, Methods & Applications, Beijing, China.","DOI":"10.1109\/BIGSARDATA.2017.8124934"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Li, J., Xu, C., Su, H., Gao, L., and Wang, T. (2022). Deep Learning for SAR Ship Detection: Past, Present and Future. Remote Sens., 14.","DOI":"10.3390\/rs14112712"},{"key":"ref_5","unstructured":"Yang, S., Xiao, W., Zhang, M., Guo, S., Zhao, J., and Shen, F. (2022). Image Data Augmentation for Deep Learning: A Survey. arXiv."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Carlini, N., and Wagner, D. (2017, January 22\u201326). Towards evaluating the robustness of neural networks. Proceedings of the IEEE Symposium on Security and Privacy, San Jose, CA, USA.","DOI":"10.1109\/SP.2017.49"},{"key":"ref_7","unstructured":"Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. (2017). Towards deep learning models resistant to adversarial attacks. arXiv."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Xie, Q., Luong, M.-T., Hovy, E., and Le, Q.V. (2020, January 14\u201319). Self-training with noisy student improves imagenet classification. Proceedings of the Computer Vision and Pattern Recognition, Virtual.","DOI":"10.1109\/CVPR42600.2020.01070"},{"key":"ref_9","unstructured":"Goodfellow, I., Shlens, J., and Szegedy, C. (2015). Explaining and harnessing adversarial examples. arXiv."},{"key":"ref_10","unstructured":"Yin, D., Gontijo Lopes, R., Shlens, J., Cubuk, E.D., and Gilmer, J. (2019). A fourier perspective on model robustness in computer vision. arXiv."},{"key":"ref_11","first-page":"7502","article-title":"Interpreting adversarially trained convolutional neural networks","volume":"97","author":"Zhang","year":"2019","journal-title":"Int. Conf. Mach. Learn. PMLR"},{"key":"ref_12","unstructured":"Kurakin, A., Goodfellow, I., and Bengio, S. (2016). 
Adversarial machine learning at scale. arXiv."},{"key":"ref_13","unstructured":"Zhang, H., and Wang, J. (November, January 27). Towards adversarially robust object detection. Proceedings of the International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"19843","DOI":"10.1007\/s10489-023-04532-5","article-title":"Exploring misclassifications of robust neural networks to enhance adversarial attacks","volume":"53","author":"Schwinn","year":"2023","journal-title":"Appl. Intell."},{"key":"ref_15","unstructured":"Shafahi, A., Najibi, M., Ghiasi, M.A., Xu, Z., Dickerson, J., Studer, C., Davis, L.S., Taylor, G., and Goldstein, T. (2019). Adversarial training for free!. arXiv."},{"key":"ref_16","unstructured":"Zhang, D., Zhang, T., Lu, Y., Zhu, Z., and Dong, B. (2019). You only propagate once: Accelerating adversarial training via maximal principle. arXiv."},{"key":"ref_17","unstructured":"Zhu, C., Cheng, Y., Gan, Z., Sun, S., Goldstein, T., and Liu, J. (2019). Freelb: Enhanced adversarial training for natural language understanding. arXiv."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"1137","DOI":"10.1109\/TPAMI.2016.2577031","article-title":"Faster R-CNN: Towards real-time object detection with region proposal networks","volume":"39","author":"Ren","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_19","unstructured":"Redmon, J., and Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv."},{"key":"ref_20","first-page":"2999","article-title":"Focal Loss for Dense Object Detection","volume":"99","author":"Lin","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_21","unstructured":"Duan, K., Bai, S., Xie, L., Qi, H., Huang, Q., and Tian, Q. (November, January 27). Centernet: Keypoint triplets for object detection. 
Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Wang, Y., Wang, C., Zhang, H., Dong, Y., and Wei, S. (2019). A SAR Dataset of Ship Detection for Deep Learning under Complex Backgrounds. Remote Sens., 11.","DOI":"10.3390\/rs11070765"},{"key":"ref_23","first-page":"852","article-title":"AIR-SARShip-1.0: High-resolution SAR ship detection dataset","volume":"8","author":"Sun","year":"2019","journal-title":"J. Radars"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1186\/s40537-019-0197-0","article-title":"A survey on image data augmentation for deep learning","volume":"6","author":"Shorten","year":"2019","journal-title":"J. Big Data"},{"key":"ref_25","unstructured":"DeVries, T., and Taylor, G.W. (2017). Improved regularization of convolutional neural networks with cutout. arXiv."},{"key":"ref_26","first-page":"13001","article-title":"Random erasing data augmentation","volume":"34","author":"Zhong","year":"2020","journal-title":"Proc. AAAI Conf. Artif. Intell."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Zhang, H., Cisse, M., Dauphin, Y.N., and Lopez-Paz, D. (2017). mixup: Beyond empirical risk minimization. arXiv.","DOI":"10.1007\/978-1-4899-7687-1_79"},{"key":"ref_28","unstructured":"Yun, S., Han, D., Oh, S.J., Chun, S., Choe, J., and Yoo, Y. (November, January 27). Cutmix: Regularization strategy to train strong classifiers with localizable features. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_29","unstructured":"Hendrycks, D., Mu, N., Cubuk, E.D., Zoph, B., Gilmer, J., and Lakshminarayanan, B. (2019). Augmix: A simple data processing method to improve robustness and uncertainty. arXiv."},{"key":"ref_30","unstructured":"Chen, P., Liu, S., Zhao, H., and Jia, J. (2020). Gridmask data augmentation. 
arXiv."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Cubuk, E.D., Zoph, B., Mane, D., Vasudevan, V., and Le, Q.V. (2019, January 16\u201320). Autoaugment: Learning augmentation strategies from data. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00020"},{"key":"ref_32","unstructured":"Li, P., Li, X., and Long, X. (2020). Fencemask: A data augmentation approach for pre-extracted image features. arXiv."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/24\/13\/4154\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T15:05:02Z","timestamp":1760108702000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/24\/13\/4154"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,6,26]]},"references-count":32,"journal-issue":{"issue":"13","published-online":{"date-parts":[[2024,7]]}},"alternative-id":["s24134154"],"URL":"https:\/\/doi.org\/10.3390\/s24134154","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2024,6,26]]}}}