{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,4]],"date-time":"2025-11-04T11:04:34Z","timestamp":1762254274497,"version":"build-2065373602"},"reference-count":34,"publisher":"MDPI AG","issue":"19","license":[{"start":{"date-parts":[[2022,9,25]],"date-time":"2022-09-25T00:00:00Z","timestamp":1664064000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"High Technology Ship Research and Development Program of the Ministry of Industry and Information Technology of China","award":["CJ02N20","62127806","NSFC U1905212"],"award-info":[{"award-number":["CJ02N20","62127806","NSFC U1905212"]}]},{"name":"National Natural Science Foundation of China","award":["CJ02N20","62127806","NSFC U1905212"],"award-info":[{"award-number":["CJ02N20","62127806","NSFC U1905212"]}]},{"name":"United Fund for Promoting Cross-straits Scientific and Technological Cooperation from the National Natural Science Foundation of China","award":["CJ02N20","62127806","NSFC U1905212"],"award-info":[{"award-number":["CJ02N20","62127806","NSFC U1905212"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>In the engine room of intelligent ships, visual recognition is an essential technical precondition for automatic inspection. At present, visual recognition in marine engine rooms suffers from missed detections, low accuracy, slow speed, and incomplete datasets. To address these problems, this paper proposes a marine engine room equipment recognition model based on the improved You Only Look Once v5 (YOLOv5) algorithm. A channel pruning method based on batch normalization (BN) layer weights is used to improve recognition speed. The complete intersection over union (CIoU) loss function and hard-swish activation function are used to enhance detection accuracy. 
Meanwhile, soft-NMS is used as the non-maximum suppression (NMS) method to reduce the false detection rate and the missed detection rate. Then, the main equipment in the marine engine room (MEMER) dataset is built. Finally, comparative experiments and ablation experiments are carried out on the MEMER dataset to verify the efficacy of these strategies in boosting model performance. Specifically, this model can accurately detect 100.00% of diesel engines, 95.91% of pumps, 94.29% of coolers, 98.54% of oil separators, 64.21% of meters, 60.23% of reservoirs, and 75.32% of valves in the actual marine engine room.<\/jats:p>","DOI":"10.3390\/s22197261","type":"journal-article","created":{"date-parts":[[2022,9,26]],"date-time":"2022-09-26T03:34:17Z","timestamp":1664163257000},"page":"7261","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":9,"title":["Research on the Application of Visual Recognition in the Engine Room of Intelligent Ships"],"prefix":"10.3390","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2991-2412","authenticated-orcid":false,"given":"Di","family":"Shang","sequence":"first","affiliation":[{"name":"College of Marine Engineering, Dalian Maritime University, Dalian 116026, China"}]},{"given":"Jundong","family":"Zhang","sequence":"additional","affiliation":[{"name":"College of Marine Engineering, Dalian Maritime University, Dalian 116026, China"}]},{"given":"Kunxin","family":"Zhou","sequence":"additional","affiliation":[{"name":"College of Marine Engineering, Dalian Maritime University, Dalian 116026, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1688-0367","authenticated-orcid":false,"given":"Tianjian","family":"Wang","sequence":"additional","affiliation":[{"name":"College of Marine Engineering, Dalian Maritime University, Dalian 116026, China"}]},{"given":"Jiahao","family":"Qi","sequence":"additional","affiliation":[{"name":"China Classification Society Dalian Branch, Dalian 116001, 
China"}]}],"member":"1968","published-online":{"date-parts":[[2022,9,25]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.joes.2021.03.001","article-title":"Ship behavior prediction via trajectory extraction-based clustering for maritime situation awareness","volume":"7","author":"Murray","year":"2022","journal-title":"J. Ocean Eng. Sci."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"107793","DOI":"10.1016\/j.oceaneng.2020.107793","article-title":"Global path planning for autonomous ship: A hybrid approach of Fast Marching Square and velocity obstacles methods","volume":"214","author":"Chen","year":"2020","journal-title":"Ocean Eng."},{"unstructured":"Ioffe, S., and Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. International Conference on Machine Learning, PMLR.","key":"ref_3"},{"unstructured":"Bovcon, B., and Kristan, M. (2021). WaSR\u2014A Water Segmentation and Refinement Maritime Obstacle Detection Network. IEEE Trans. Cybern., 1\u201314.","key":"ref_4"},{"key":"ref_5","first-page":"1407","article-title":"Detection and tracking for the awareness of surroundings of a ship based on deep learning","volume":"8","author":"Lee","year":"2021","journal-title":"J. Comput. Des. Eng."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"677","DOI":"10.1109\/TPAMI.2016.2599174","article-title":"Long-Term Recurrent Convolutional Networks for Visual Recognition and Description","volume":"39","author":"Donahue","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"781","DOI":"10.1109\/TCSVT.2019.2897980","article-title":"Saliency-Aware Convolution Neural Network for Ship Detection in Surveillance Video","volume":"30","author":"Shao","year":"2020","journal-title":"IEEE Trans. Circuits Syst. 
Video Technol."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"102907","DOI":"10.1109\/ACCESS.2020.2997466","article-title":"Research on Recognition of Fly Species Based on Improved RetinaNet and CBAM","volume":"8","author":"Chen","year":"2020","journal-title":"IEEE Access"},{"doi-asserted-by":"crossref","unstructured":"Zheng, G., Zhao, J., Li, S., and Feng, J. (2021). Zero-Shot Pipeline Detection for Sub-Bottom Profiler Data Based on Imaging Principles. Remote Sens., 13.","key":"ref_9","DOI":"10.3390\/rs13214401"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"18832","DOI":"10.1109\/ACCESS.2019.2962823","article-title":"Detection and Analysis of Behavior Trajectory for Sea Cucumbers Based on Deep Learning","volume":"8","author":"Li","year":"2020","journal-title":"IEEE Access"},{"doi-asserted-by":"crossref","unstructured":"Neubeck, A., and Van Gool, L. (2006, January 20\u201324). Efficient non-maximum suppression. Proceedings of the 18th International Conference on Pattern Recognition, Hong Kong, China.","key":"ref_11","DOI":"10.1109\/ICPR.2006.479"},{"doi-asserted-by":"crossref","unstructured":"Girshick, R., Donahue, J., Darrell, T., and Malik, J. (2014, January 23\u201328). Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA.","key":"ref_12","DOI":"10.1109\/CVPR.2014.81"},{"doi-asserted-by":"crossref","unstructured":"Zhu, C., He, Y., and Savvides, M. (2019, January 15\u201320). Feature selective anchor-free module for single-shot object detection. Proceedings of the 2019 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","key":"ref_13","DOI":"10.1109\/CVPR.2019.00093"},{"doi-asserted-by":"crossref","unstructured":"Zhu, C., Chen, F., Shen, Z., and Savvides, M. (2020). Soft anchor-point object detection. 
European Conference on Computer Vision, Springer.","key":"ref_14","DOI":"10.1007\/978-3-030-58545-7_6"},{"doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Doll\u00e1r, P., Girshick, R., He, K., Hariharan, B., and Belongie, S. (2017, January 21\u201326). Feature Pyramid Networks for Object Detection. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","key":"ref_15","DOI":"10.1109\/CVPR.2017.106"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"012060","DOI":"10.1088\/1742-6596\/2173\/1\/012060","article-title":"Detection of Auxiliary Equipment in Engine Room Based on Improved SSD","volume":"2173","author":"Qi","year":"2022","journal-title":"J. Phys. Conf. Ser."},{"doi-asserted-by":"crossref","unstructured":"Qi, J., Zhang, J., and Meng, Q. (2021). Auxiliary Equipment Detection in Marine Engine Rooms Based on Deep Learning Model. J. Mar. Sci. Eng., 9.","key":"ref_17","DOI":"10.3390\/jmse9091006"},{"doi-asserted-by":"crossref","unstructured":"Zhu, X., Lyu, S., Wang, X., and Zhao, Q. (2021, January 11\u201317). TPH-YOLOv5: Improved YOLOv5 based on transformer prediction head for object detection on drone-captured scenarios. Proceedings of the 2021 IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","key":"ref_18","DOI":"10.1109\/ICCVW54120.2021.00312"},{"doi-asserted-by":"crossref","unstructured":"Guo, Z., Wang, C., Yang, G., Huang, Z., and Li, G. (2022). MSFT-YOLO: Improved YOLOv5 Based on Transformer for Detecting Defects of Steel Surface. Sensors, 22.","key":"ref_19","DOI":"10.3390\/s22093467"},{"doi-asserted-by":"crossref","unstructured":"Ting, L., Baijun, Z., Yongsheng, Z., and Shun, Y. (2021, January 15\u201317). Ship detection algorithm based on improved YOLO V5. 
Proceedings of the 2021 6th International Conference on Automation, Control and Robotics Engineering (CACRE), Dalian, China.","key":"ref_20","DOI":"10.1109\/CACRE52464.2021.9501331"},{"unstructured":"Han, S., Pool, J., Tran, J., and Dally, W. (2015). Learning both weights and connections for efficient neural network. Adv. Neural Inf. Process. Syst., 28, Available online: https:\/\/proceedings.neurips.cc\/paper\/2015\/file\/ae0eb3eed39d2bcef4622b2499a05fe6-Paper.pdf.","key":"ref_21"},{"doi-asserted-by":"crossref","unstructured":"Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016, January 27\u201330). You only look once: Unified, real-time object detection. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","key":"ref_22","DOI":"10.1109\/CVPR.2016.91"},{"doi-asserted-by":"crossref","unstructured":"Wang, C.Y., Liao, H.Y.M., Wu, Y.H., Chen, P.Y., Hsieh, J.W., and Yeh, I.H. (2020, January 14\u201319). CSPNet: A new backbone that can enhance learning capability of CNN. Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA.","key":"ref_23","DOI":"10.1109\/CVPRW50498.2020.00203"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"1904","DOI":"10.1109\/TPAMI.2015.2389824","article-title":"Spatial pyramid pooling in deep convolutional networks for visual recognition","volume":"37","author":"He","year":"2015","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"2124","DOI":"10.1049\/ipr2.12477","article-title":"Gesture recognition based on modified Yolov5s","volume":"16","author":"Hu","year":"2022","journal-title":"IET Image Process."},{"doi-asserted-by":"crossref","unstructured":"Rezatofighi, H., Tsoi, N., Gwak, J., Sadeghian, A., Reid, I., and Savarese, S. (2019, January 15\u201320). Generalized intersection over union: A metric and a loss for bounding box regression. 
Proceedings of the 2019 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","key":"ref_26","DOI":"10.1109\/CVPR.2019.00075"},{"unstructured":"Han, S., Mao, H., and Dally, W.J. (2015). Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv.","key":"ref_27"},{"doi-asserted-by":"crossref","unstructured":"Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S., and Zhang, C. (2017, January 22\u201329). Learning efficient convolutional networks through network slimming. Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Italy.","key":"ref_28","DOI":"10.1109\/ICCV.2017.298"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"181","DOI":"10.1016\/j.ipl.2005.11.003","article-title":"Weighted random sampling with a reservoir","volume":"97","author":"Efraimidis","year":"2006","journal-title":"Inf. Process. Lett."},{"key":"ref_30","first-page":"12993","article-title":"Distance-IoU loss: Faster and better learning for bounding box regression","volume":"34","author":"Zheng","year":"2020","journal-title":"Proc. AAAI Conf. Artif. Intell."},{"doi-asserted-by":"crossref","unstructured":"Bodla, N., Singh, B., Chellappa, R., and Davis, L. (2017, January 22\u201329). Soft-NMS: Improving object detection with one line of code. Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","key":"ref_31","DOI":"10.1109\/ICCV.2017.593"},{"key":"ref_32","first-page":"13001","article-title":"Random erasing data augmentation","volume":"34","author":"Zhong","year":"2020","journal-title":"Proc. AAAI Conf. Artif. Intell."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"104327","DOI":"10.1016\/j.tust.2021.104327","article-title":"Automatic recognition and classification of microseismic waveforms based on computer vision","volume":"121","author":"Li","year":"2022","journal-title":"Tunn. Undergr. 
Space Technol."},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"102827","DOI":"10.1016\/j.cviu.2019.102827","article-title":"ASSD: Attentive single shot multibox detector","volume":"189","author":"Yi","year":"2019","journal-title":"Comput. Vis. Image Underst."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/19\/7261\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:39:05Z","timestamp":1760143145000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/19\/7261"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,9,25]]},"references-count":34,"journal-issue":{"issue":"19","published-online":{"date-parts":[[2022,10]]}},"alternative-id":["s22197261"],"URL":"https:\/\/doi.org\/10.3390\/s22197261","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2022,9,25]]}}}