{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,15]],"date-time":"2026-04-15T02:15:27Z","timestamp":1776219327378,"version":"3.50.1"},"reference-count":25,"publisher":"MDPI AG","issue":"10","license":[{"start":{"date-parts":[[2021,5,13]],"date-time":"2021-05-13T00:00:00Z","timestamp":1620864000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Synthetic Aperture Radar (SAR) has become one of the most important technical means of marine monitoring in the field of remote sensing due to its all-day, all-weather imaging capability. Ship monitoring in national territorial waters supports maritime law enforcement, maritime traffic control, and national maritime security, so ship detection has long been a research hotspot. As the field has evolved from traditional detection methods to deep-learning-based ones, most studies have relied on ever-growing Graphics Processing Unit (GPU) computing power to propose increasingly complex and computationally intensive strategies, while the transplantation of optical-image detection methods has ignored the low signal-to-noise ratio, low resolution, single-channel output, and other characteristics that follow from the SAR imaging principle. Detection accuracy has been pursued at the expense of detection speed and practical deployment: almost all algorithms rely on powerful clustered desktop GPUs and therefore cannot be deployed on the front line of marine monitoring to cope with changing realities. 
To address these issues, this paper proposes a multi-channel fusion SAR image processing method that makes full use of the image information and of the network\u2019s ability to extract features; the modeling architecture and model training are based on the latest You Only Look Once version 4 (YOLO-V4) deep learning framework. The YOLO-V4-light network was tailored for real-time operation and practical deployment, significantly reducing the model size, detection time, number of computational parameters, and memory consumption, and the network was refined for three-channel images to compensate for the loss of accuracy caused by light-weighting. The test experiments were completed entirely on a portable computer and achieved an Average Precision (AP) of 90.37% on the SAR Ship Detection Dataset (SSDD), simplifying the model while maintaining a lead over most existing methods. The YOLO-V4-light ship detection algorithm proposed in this paper has great practical value in maritime safety monitoring and emergency rescue.<\/jats:p>","DOI":"10.3390\/rs13101909","type":"journal-article","created":{"date-parts":[[2021,5,14]],"date-time":"2021-05-14T03:28:36Z","timestamp":1620962916000},"page":"1909","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":116,"title":["High-Speed Lightweight Ship Detection Algorithm Based on YOLO-V4 for Three-Channels RGB SAR Image"],"prefix":"10.3390","volume":"13","author":[{"given":"Jiahuan","family":"Jiang","sequence":"first","affiliation":[{"name":"The School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China"}]},{"given":"Xiongjun","family":"Fu","sequence":"additional","affiliation":[{"name":"The School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1123-0090","authenticated-orcid":false,"given":"Rui","family":"Qin","sequence":"additional","affiliation":[{"name":"The School of 
Information and Electronics, Beijing Institute of Technology, Beijing 100081, China"}]},{"given":"Xiaoyan","family":"Wang","sequence":"additional","affiliation":[{"name":"The School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China"}]},{"given":"Zhifeng","family":"Ma","sequence":"additional","affiliation":[{"name":"The School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China"}]}],"member":"1968","published-online":{"date-parts":[[2021,5,13]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1536","DOI":"10.1109\/LGRS.2015.2412174","article-title":"A Bilateral CFAR Algorithm for Ship Detection in SAR Images","volume":"12","author":"Leng","year":"2015","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"194","DOI":"10.1109\/LGRS.2008.915593","article-title":"Using SAR Images to Detect Ships from Sea Clutter","volume":"5","author":"Liao","year":"2008","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_3","unstructured":"Krizhevsky, A., Sutskever, I., and Hinton, G. (2012). Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, ACM."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Girshick, R., Donahue, J., Darrell, T., and Malik, J. (2014, January 23\u201328). Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA.","DOI":"10.1109\/CVPR.2014.81"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Girshick, R. (2015, January 13\u201316). Fast R-CNN. Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile.","DOI":"10.1109\/ICCV.2015.169"},{"key":"ref_6","unstructured":"Ren, S., He, K., Girshick, R., and Sun, J. (2015). 
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. Advances in Neural Information Processing Systems, IEEE."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016, January 27\u201330). You Only Look Once: Unified, Real-Time Object Detection. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.91"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S.E., Fu, C.-Y., and Berg, A.C. (2016, January 8\u201316). SSD: Single Shot MultiBox Detector. Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46448-0_2"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Chen, Z., and Gao, X. (2018, January 9\u201311). An Improved Algorithm for Ship Target Detection in SAR Images Based on Faster R-CNN. Proceedings of the 2018 Ninth International Conference on Intelligent Control and Information Processing (ICICIP), Wanzhou, China.","DOI":"10.1109\/ICICIP.2018.8606720"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Wang, R., Xu, F., Pei, J., Wang, C., Huang, Y., Yang, J., and Wu, J. (August, January 28). An Improved Faster R-CNN Based on MSER Decision Criterion for SAR Image Ship Detection in Harbor. Proceedings of the IGARSS 2019\u20142019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan.","DOI":"10.1109\/IGARSS.2019.8898078"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Zhang, T., and Zhang, X. (2019). High-Speed Ship Detection in SAR Images Based on a Grid Convolutional Neural Network. Remote Sens., 11.","DOI":"10.3390\/rs11101206"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Chang, Y.-L., Anagaw, A., Chang, L., Wang, Y.C., Hsiao, C.-Y., and Lee, W.-H. (2019). 
Ship Detection Based on YOLOv2 for SAR Imagery. Remote Sens., 11.","DOI":"10.3390\/rs11070786"},{"key":"ref_13","unstructured":"Bochkovskiy, A., Wang, C., and Liao, H.M. (2020). YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv, Available online: https:\/\/arxiv.org\/abs\/2004.10934."},{"key":"ref_14","unstructured":"Redmon, J., and Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv, Available online: https:\/\/arxiv.org\/abs\/1804.02767."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"303","DOI":"10.1007\/s11263-009-0275-4","article-title":"The Pascal Visual Object Classes (VOC) Challenge","volume":"88","author":"Everingham","year":"2010","journal-title":"Int. J. Comput. Vis."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Fleet, D., Pajdla, T., Schiele, B., and Tuytelaars, T. (2014). Microsoft COCO: Common Objects in Context. Computer Vision ECCV 2014. ECCV 2014, Springer. Lecture Notes in Computer Science.","DOI":"10.1007\/978-3-319-10578-9"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Wang, C.-Y., Liao, H.-Y.M., Wu, Y.-H., Chen, P.-Y., Hsieh, Y.-W., and Yeh, I.-H. (2020, January 14\u201319). CSPNet: A New Backbone that can Enhance Learning Capability of CNN. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Seattle, WA, USA.","DOI":"10.1109\/CVPRW50498.2020.00203"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Liu, S., Qi, L., Qin, H., Shi, J., and Jia, J. (2018, January 18\u201323). Path Aggregation Network for Instance Segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00913"},{"key":"ref_19","first-page":"4760","article-title":"PolSAR Image Classification Based on Low-Frequency and Contour Subbands-Driven Polarimetric SENet","volume":"13","author":"Qin","year":"2020","journal-title":"IEEE J. 
Stars"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"3089","DOI":"10.1109\/TIP.2006.877507","article-title":"The Nonsubsampled Contourlet Transform: Theory, Design, and Applications","volume":"15","author":"Zhou","year":"2006","journal-title":"IEEE Trans. Image Process."},{"key":"ref_21","unstructured":"(2021, March 28). How Fast Is My Model? Available online: https:\/\/machinethink.net\/blog\/how-fast-is-my-model\/."},{"key":"ref_22","unstructured":"Molchanov, P., Tyree, S., Karras, T., Aila, T., and Kautz, J. (2016). Pruning Convolutional Neural Networks for Resource Efficient Inference. arXiv, Available online: https:\/\/arxiv.org\/abs\/1611.06440."},{"key":"ref_23","unstructured":"(2021, March 28). Darknet. Available online: https:\/\/github.com\/AlexeyAB\/darknet."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Zhang, T., Zhang, X., Shi, J., and Wei, S. (2019). Depthwise Separable Convolution Neural Network for High-Speed SAR Ship Detection. Remote Sens., 11.","DOI":"10.3390\/rs11212483"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Li, J., Qu, C., and Shao, J. (2017, January 13\u201314). Ship detection in SAR images based on an improved faster R-CNN. 
Proceedings of the 2017 SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA), Beijing, China.","DOI":"10.1109\/BIGSARDATA.2017.8124934"}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/13\/10\/1909\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T06:00:23Z","timestamp":1760162423000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/13\/10\/1909"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,5,13]]},"references-count":25,"journal-issue":{"issue":"10","published-online":{"date-parts":[[2021,5]]}},"alternative-id":["rs13101909"],"URL":"https:\/\/doi.org\/10.3390\/rs13101909","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,5,13]]}}}