{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,7,30]],"date-time":"2025-07-30T15:04:53Z","timestamp":1753887893540,"version":"3.41.2"},"reference-count":37,"publisher":"Wiley","issue":"1","license":[{"start":{"date-parts":[[2021,11,11]],"date-time":"2021-11-11T00:00:00Z","timestamp":1636588800000},"content-version":"vor","delay-in-days":314,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100012226","name":"Fundamental Research Funds for the Central Universities","doi-asserted-by":"publisher","award":["3132019400"],"award-info":[{"award-number":["3132019400"]}],"id":[{"id":"10.13039\/501100012226","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["Computational Intelligence and Neuroscience"],"published-print":{"date-parts":[[2021,1]]},"abstract":"<jats:p>At night, buoys and other navigation marks disappear, replaced by fixed or flashing lights: a navigation mark is seen as a set of lights in various colors rather than by its familiar outline. Deciphering the meaning of these lights is a burden to navigators, and it is also a new and challenging research direction in intelligent sensing of the navigation environment. This study is the first to investigate the intelligent recognition of navigation mark\u2019s lights at night based on multilabel video classification methods. To effectively capture the characteristics of navigation mark\u2019s lights, including both color and flashing phase, three multilabel classification models, based on binary relevance, label power set, and an adapted algorithm, were investigated and compared. In experiments on a data set of 8000\u2009minutes of video, the binary-relevance-based model, named NMLNet, achieved the highest accuracy, about 99.23%, in classifying 9 types of navigation mark\u2019s lights. 
It also has the fastest computation speed and the fewest network parameters. NMLNet has two branches, for the classification of color and of flashing, respectively. For flashing classification, an improved MobileNet\u2010v2 captures the brightness characteristics of the lights in each video frame, and an LSTM captures their temporal dynamics. To run on mobile devices aboard vessels, MobileNet\u2010v2 was used as the backbone; with the addition of a spatial attention mechanism, it achieved accuracy near that of ResNet\u201050 while keeping its high speed.<\/jats:p>","DOI":"10.1155\/2021\/6794202","type":"journal-article","created":{"date-parts":[[2021,11,11]],"date-time":"2021-11-11T23:35:13Z","timestamp":1636673713000},"update-policy":"https:\/\/doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Multilabel Video Classification Model of Navigation Mark\u2019s Lights Based on Deep Learning"],"prefix":"10.1155","volume":"2021","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4573-0624","authenticated-orcid":false,"given":"Xu","family":"Han","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1327-6105","authenticated-orcid":false,"given":"Mingyang","family":"Pan","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5766-492X","authenticated-orcid":false,"given":"Haipeng","family":"Ge","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9394-6142","authenticated-orcid":false,"given":"Shaoxi","family":"Li","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6609-8552","authenticated-orcid":false,"given":"Jingfeng","family":"Hu","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4662-1972","authenticated-orcid":false,"given":"Lining","family":"Zhao","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7268-8981
","authenticated-orcid":false,"given":"Yu","family":"Li","sequence":"additional","affiliation":[]}],"member":"311","published-online":{"date-parts":[[2021,11,11]]},"reference":[{"key":"e_1_2_10_1_2","doi-asserted-by":"crossref","unstructured":"Garcia-DominguezA. Mobile applications cloud and bigdata on ships and shore stations for increased safety on marine traffic; a smart ship project Proceedings of 2015 IEEE International Conference on Industrial Technology (ICIT) 2015 Seville Spain 1532\u20131537 https:\/\/doi.org\/10.1109\/icit.2015.7125314 2-s2.0-84937721027.","DOI":"10.1109\/ICIT.2015.7125314"},{"key":"e_1_2_10_2_2","doi-asserted-by":"crossref","unstructured":"TangY.andShaoN. Design and research of integrated information platform for smart ship Proceedings of the 4th International Conference on Transportation Information and Safety (ICTIS) 2017 Alberta Canada 37\u201341 https:\/\/doi.org\/10.1109\/ictis.2017.8047739 2-s2.0-85032795782.","DOI":"10.1109\/ICTIS.2017.8047739"},{"key":"e_1_2_10_3_2","doi-asserted-by":"crossref","unstructured":"PandeyJ.andHasegawaK. Autonomous navigation of catamaran surface vessel Proceedings of the 2017 IEEE Underwater Technology (UT) 2017 Busan South Korea 1\u20136 https:\/\/doi.org\/10.1109\/ut.2017.7890342 2-s2.0-85018190352.","DOI":"10.1109\/UT.2017.7890342"},{"key":"e_1_2_10_4_2","doi-asserted-by":"publisher","DOI":"10.1007\/s13344-016-0056-0"},{"key":"e_1_2_10_5_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.procs.2018.07.148"},{"key":"e_1_2_10_6_2","unstructured":"International Dictionary of Marine Aids to Navigation https:\/\/www.ialaaism.org\/wiki\/dictionary\/index.php\/Aid_to_Navigation."},{"key":"e_1_2_10_7_2","first-page":"1663","article-title":"Ship tracking and recognition based on Darknet network and YOLOv3 algorithm","author":"Liu B.","year":"2019","journal-title":"Journal of Computer Applications"},{"key":"e_1_2_10_8_2","doi-asserted-by":"crossref","unstructured":"FuH. LiY. WangY. andLiP. 
Maritime ship targets recognition with deep learning Proceedings of the 37th Chinese Control Conference (CCC) 2018 Wuhan China 9297\u20139302 https:\/\/doi.org\/10.23919\/chicc.2018.8484085 2-s2.0-85056096096.","DOI":"10.23919\/ChiCC.2018.8484085"},{"key":"e_1_2_10_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/access.2020.2973856"},{"key":"e_1_2_10_10_2","doi-asserted-by":"publisher","DOI":"10.1109\/tits.2015.2509509"},{"key":"e_1_2_10_11_2","unstructured":"BadueC. GuidoliniR. andCarneiroR. V. Self-driving cars: a survey 2019 http:\/\/arxiv.org\/abs\/1901.04407."},{"key":"e_1_2_10_12_2","doi-asserted-by":"crossref","unstructured":"JensenM. B. NasrollahiK. andMoeslundT. B. Evaluating state-of-the-art object detector on challenging traffic light data Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2017 Honolulu HI USA 882\u2013888 https:\/\/doi.org\/10.1109\/cvprw.2017.122 2-s2.0-85030230170.","DOI":"10.1109\/CVPRW.2017.122"},{"key":"e_1_2_10_13_2","doi-asserted-by":"crossref","unstructured":"Diaz-CabreraM. CerriP. andSanchez-MedinaJ. Suspended traffic lights detection and distance estimation using color features Proceedings of 2012 15th International IEEE Conference on Intelligent Transportation Systems 2012 Anchorage AK USA 1315\u20131320 https:\/\/doi.org\/10.1109\/itsc.2012.6338765 2-s2.0-84871227730.","DOI":"10.1109\/ITSC.2012.6338765"},{"key":"e_1_2_10_14_2","doi-asserted-by":"crossref","unstructured":"ZhangY. XueJ. ZhangG. ZhangY.et al. A multi-feature fusion based traffic light recognition algorithm for intelligent vehicles in Proceedings of the 33rd Chinese Control Conference 2014 4924\u20134929 https:\/\/doi.org\/10.1109\/chicc.2014.6895775 2-s2.0-84907938404.","DOI":"10.1109\/ChiCC.2014.6895775"},{"key":"e_1_2_10_15_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.sysarc.2013.12.003"},{"key":"e_1_2_10_16_2","doi-asserted-by":"crossref","unstructured":"WeberM. WolfP. andZollnerJ. M. 
DeepTLR: a single deep convolutional network for detection and classification of traffic lights Proceedings of 2016 IEEE Intelligent Vehicles Symposium (IV) 2016 Gotenburg Sweden 342\u2013348.","DOI":"10.1109\/IVS.2016.7535408"},{"key":"e_1_2_10_17_2","doi-asserted-by":"crossref","unstructured":"BehrendtK. NovakL. andBotrosR. A deep learning approach to traffic lights: detection tracking and classification Proceedings of 2017 IEEE International Conference on Robotics and Automation (ICRA) 2017 Singapore 1370\u20131377 https:\/\/doi.org\/10.1109\/icra.2017.7989163 2-s2.0-85027992778.","DOI":"10.1109\/ICRA.2017.7989163"},{"key":"e_1_2_10_18_2","doi-asserted-by":"crossref","unstructured":"M\u00fcllerJ.andDietmayerK. Detecting traffic lights by single shot detection 2018 http:\/\/arxiv.org\/abs\/1805.02523.","DOI":"10.1109\/ITSC.2018.8569683"},{"key":"e_1_2_10_19_2","doi-asserted-by":"crossref","unstructured":"ZhaS. LuisierF. andAndrewsW. Exploiting image-trained CNN architectures for unconstrained video classification 60 Proceedings of the British Machine Vision Conference 2015 2015 Swansea UK no. 13 1\u201360 https:\/\/doi.org\/10.5244\/c.29.60.","DOI":"10.5244\/C.29.60"},{"key":"e_1_2_10_20_2","doi-asserted-by":"crossref","unstructured":"KarpathyA. TodericiG. ShettyS. andLeungT. Large-scale video classification with convolutional neural networks Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition 2014 Columbus OH USA 1725\u20131732 https:\/\/doi.org\/10.1109\/cvpr.2014.223 2-s2.0-84911364368.","DOI":"10.1109\/CVPR.2014.223"},{"key":"e_1_2_10_21_2","doi-asserted-by":"crossref","unstructured":"TranD. BourdevL. andFergusR. Learning spatiotemporal features with 3D convolutional networks 2015 http:\/\/arxiv.org\/abs\/1412.0767.","DOI":"10.1109\/ICCV.2015.510"},{"key":"e_1_2_10_22_2","doi-asserted-by":"publisher","DOI":"10.1145\/3122865.3122867"},{"key":"e_1_2_10_23_2","unstructured":"SimonyanK.andZissermanA. 
Two-stream convolutional networks for action recognition in videos 2014 http:\/\/arxiv.org\/abs\/1406.2199."},{"key":"e_1_2_10_24_2","doi-asserted-by":"crossref","unstructured":"WangL. XiongY. andWangZ. Temporal segment networks: towards good practices for deep action recognition 2016 http:\/\/arxiv.org\/abs\/1608.00859.","DOI":"10.1007\/978-3-319-46484-8_2"},{"key":"e_1_2_10_25_2","doi-asserted-by":"crossref","unstructured":"CarreiraJ.andZissermanA. Quo vadis action recognition? a new model and the kinetics dataset 2018 http:\/\/arxiv.org\/abs\/1705.07750.","DOI":"10.1109\/CVPR.2017.502"},{"key":"e_1_2_10_26_2","doi-asserted-by":"crossref","unstructured":"WuZ. WangX. andJiangY.-G. Modeling spatial-temporal clues in a hybrid deep learning framework for video classification Proceedings of the 23rd ACM International Conference on Multimedia 2015 Brisbane Australia 461\u2013470 https:\/\/doi.org\/10.1145\/2733373.2806222 2-s2.0-84962921420.","DOI":"10.1145\/2733373.2806222"},{"key":"e_1_2_10_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/tpami.2016.2599174"},{"key":"e_1_2_10_28_2","unstructured":"SharmaS. KirosR. andSalakhutdinovR. Action recognition using visual attention 2016 http:\/\/arxiv.org\/abs\/1511.04119."},{"key":"e_1_2_10_29_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cviu.2017.10.011"},{"key":"e_1_2_10_30_2","first-page":"1","article-title":"Multi label classification: an overview","author":"Grigorios T.","year":"2007","journal-title":"International Journal of Data Warehousing and Mining"},{"key":"e_1_2_10_31_2","doi-asserted-by":"crossref","unstructured":"GharroudiO. ElghazelH. andAussemA. 
Ensemble multi-label classification: a comparative study on threshold selection and voting methods Proceedings of 2015 IEEE 27th International Conference on Tools with Artificial Intelligence (ICTAI) 2015 Vietri sul Mare Italy 377\u2013384 https:\/\/doi.org\/10.1109\/ictai.2015.64 2-s2.0-84963626492.","DOI":"10.1109\/ICTAI.2015.64"},{"key":"e_1_2_10_32_2","unstructured":"ReadJ.andPerez-CruzF. Deep learning for multi-label classification 2014 http:\/\/arxiv.org\/abs\/1502.05988."},{"key":"e_1_2_10_33_2","doi-asserted-by":"publisher","DOI":"10.1186\/s12859-017-1898-z"},{"key":"e_1_2_10_34_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41598-019-42294-8"},{"key":"e_1_2_10_35_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10489-017-1033-x"},{"key":"e_1_2_10_36_2","doi-asserted-by":"publisher","DOI":"10.3390\/app9061123"},{"key":"e_1_2_10_37_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11042-017-5532-x"}],"container-title":["Computational Intelligence and Neuroscience"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/downloads.hindawi.com\/journals\/cin\/2021\/6794202.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/cin\/2021\/6794202.xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1155\/2021\/6794202","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,8,6]],"date-time":"2024-08-06T11:30:45Z","timestamp":1722943845000},"score":1,"resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.1155\/2021\/6794202"}},"subtitle":[],"editor":[{"given":"Navid","family":"Razmjooy","sequence":"additional","affiliation":[]}],"short-title":[],"issued":{"date-parts":[[2021,1]]},"references-count":37,"journal-issue":{"issue":"1","published-print":{"date-parts":[[
2021,1]]}},"alternative-id":["10.1155\/2021\/6794202"],"URL":"https:\/\/doi.org\/10.1155\/2021\/6794202","archive":["Portico"],"relation":{},"ISSN":["1687-5265","1687-5273"],"issn-type":[{"type":"print","value":"1687-5265"},{"type":"electronic","value":"1687-5273"}],"subject":[],"published":{"date-parts":[[2021,1]]},"assertion":[{"value":"2021-07-29","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-10-21","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-11-11","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}],"article-number":"6794202"}}