{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,16]],"date-time":"2026-03-16T19:13:53Z","timestamp":1773688433014,"version":"3.50.1"},"reference-count":44,"publisher":"MDPI AG","issue":"15","license":[{"start":{"date-parts":[[2023,8,7]],"date-time":"2023-08-07T00:00:00Z","timestamp":1691366400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Natural Science Foundation of China","award":["62103258"],"award-info":[{"award-number":["62103258"]}]},{"name":"National Natural Science Foundation of China","award":["2021YFC2801001"],"award-info":[{"award-number":["2021YFC2801001"]}]},{"name":"National Natural Science Foundation of China","award":["22PJD029"],"award-info":[{"award-number":["22PJD029"]}]},{"name":"National Natural Science Foundation of China","award":["21YF1416700"],"award-info":[{"award-number":["21YF1416700"]}]},{"name":"National Key Research and Development Program of China","award":["62103258"],"award-info":[{"award-number":["62103258"]}]},{"name":"National Key Research and Development Program of China","award":["2021YFC2801001"],"award-info":[{"award-number":["2021YFC2801001"]}]},{"name":"National Key Research and Development Program of China","award":["22PJD029"],"award-info":[{"award-number":["22PJD029"]}]},{"name":"National Key Research and Development Program of China","award":["21YF1416700"],"award-info":[{"award-number":["21YF1416700"]}]},{"name":"Shanghai Pujiang Program","award":["62103258"],"award-info":[{"award-number":["62103258"]}]},{"name":"Shanghai Pujiang Program","award":["2021YFC2801001"],"award-info":[{"award-number":["2021YFC2801001"]}]},{"name":"Shanghai Pujiang Program","award":["22PJD029"],"award-info":[{"award-number":["22PJD029"]}]},{"name":"Shanghai Pujiang Program","award":["21YF1416700"],"award-info":[{"award-number":["21YF1416700"]}]},{"name":"Shanghai Yangfan 
Program","award":["62103258"],"award-info":[{"award-number":["62103258"]}]},{"name":"Shanghai Yangfan Program","award":["2021YFC2801001"],"award-info":[{"award-number":["2021YFC2801001"]}]},{"name":"Shanghai Yangfan Program","award":["22PJD029"],"award-info":[{"award-number":["22PJD029"]}]},{"name":"Shanghai Yangfan Program","award":["21YF1416700"],"award-info":[{"award-number":["21YF1416700"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Ship classification, as an important problem in the field of computer vision, has been the focus of research for various algorithms over the past few decades. In particular, convolutional neural networks (CNNs) have become one of the most popular models for ship classification tasks, especially using deep learning methods. Currently, several classical methods have used single-scale features to tackle ship classification, without paying much attention to the impact of multiscale features. Therefore, this paper proposes a multiscale feature fusion ship classification method based on evidence theory. In this method, multiple scales of features were utilized to fuse the feature maps of three different sizes (40 \u00d7 40 \u00d7 256, 20 \u00d7 20 \u00d7 512, and 10 \u00d7 10 \u00d7 1024), which were used to perform ship classification tasks separately. Finally, the multiscales-based classification results were treated as pieces of evidence and fused at the decision level using evidence theory to obtain the final classification result. 
Experimental results demonstrate that, compared to classical classification networks, this method can effectively improve classification accuracy.<\/jats:p>","DOI":"10.3390\/rs15153916","type":"journal-article","created":{"date-parts":[[2023,8,8]],"date-time":"2023-08-08T12:38:59Z","timestamp":1691498339000},"page":"3916","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["E-FPN: Evidential Feature Pyramid Network for Ship Classification"],"prefix":"10.3390","volume":"15","author":[{"given":"Yilin","family":"Dong","sequence":"first","affiliation":[{"name":"College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China"}]},{"given":"Kunhai","family":"Xu","sequence":"additional","affiliation":[{"name":"College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China"}]},{"given":"Changming","family":"Zhu","sequence":"additional","affiliation":[{"name":"College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9135-5661","authenticated-orcid":false,"given":"Enguang","family":"Guan","sequence":"additional","affiliation":[{"name":"College of Logistics Engineering, Shanghai Maritime University, Shanghai 201306, China"}]},{"given":"Yihai","family":"Liu","sequence":"additional","affiliation":[{"name":"Jiangsu Automation Research Institute, Lianyungang 222061, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,8,7]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"305","DOI":"10.1016\/j.neunet.2022.10.023","article-title":"Improved Residual Network based on norm-preservation for visual recognition","volume":"157","author":"Mahaur","year":"2023","journal-title":"Neural Netw."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"582","DOI":"10.1007\/s10278-019-00227-x","article-title":"Deep learning techniques for medical image 
segmentation: Achievements and challenges","volume":"32","author":"Hesamian","year":"2019","journal-title":"J. Digit. Imaging"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"570","DOI":"10.3390\/mi14030570","article-title":"Artificial intelligence-based smart quality inspection for manufacturing","volume":"14","author":"Sundaram","year":"2023","journal-title":"Micromachines"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Azizah, L.M., Umayah, S.F., Riyadi, S., Damarjati, C., and Utama, N.A. (2017, January 24\u201326). Deep learning implementation using convolutional neural network in mangosteen surface defect detection. Proceedings of the 2017 7th IEEE International Conference on Control System, Computing and Engineering (ICCSCE), Penang, Malaysia.","DOI":"10.1109\/ICCSCE.2017.8284412"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"2004","DOI":"10.3390\/rs15082004","article-title":"Rice Yield Prediction in Different Growth Environments Using Unmanned Aerial Vehicle-Based Hyperspectral Imaging","volume":"15","author":"Kurihara","year":"2023","journal-title":"Remote Sens."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"1979","DOI":"10.3390\/rs15081979","article-title":"Quantitative Evaluation of Maize Emergence Using UAV Imagery and Deep Learning","volume":"15","author":"Liu","year":"2023","journal-title":"Remote Sens."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"108245","DOI":"10.1016\/j.patcog.2021.108245","article-title":"Towards automatic threat detection: A survey of advances of deep learning within X-ray security imaging","volume":"122","author":"Akcay","year":"2022","journal-title":"Pattern Recognit."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"331","DOI":"10.3390\/electronics8030331","article-title":"Learning to see the hidden part of the vehicle in the autopilot 
scene","volume":"8","author":"Xu","year":"2019","journal-title":"Electronics"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"1773","DOI":"10.3390\/rs15071773","article-title":"P2FEViT: Plug-and-Play CNN Feature Embedded Hybrid Vision Transformer for Remote Sensing Image Classification","volume":"15","author":"Wang","year":"2023","journal-title":"Remote Sens."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"2215","DOI":"10.3390\/rs14092215","article-title":"Attention mechanism and depthwise separable convolution aided 3DCNN for hyperspectral remote sensing image classification","volume":"14","author":"Li","year":"2022","journal-title":"Remote Sens."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"1758","DOI":"10.3390\/rs15071758","article-title":"Multi-Scale Spectral-Spatial Attention Network for Hyperspectral Image Classification Combining 2D Octave and 3D Convolutional Neural Networks","volume":"15","author":"Liang","year":"2023","journal-title":"Remote Sens."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"545","DOI":"10.3390\/rs14030545","article-title":"Remote sensing scene image classification based on self-compensating convolution neural network","volume":"14","author":"Shi","year":"2022","journal-title":"Remote Sens."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"109305","DOI":"10.1016\/j.patcog.2023.109305","article-title":"Granularity-Aware Distillation and Structure Modeling Region Proposal Network for Fine-Grained Image Classification","volume":"137","author":"Ke","year":"2023","journal-title":"Pattern Recognit."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"306","DOI":"10.1016\/j.neunet.2023.01.050","article-title":"Feature relocation network for fine-grained image classification","volume":"161","author":"Zhao","year":"2023","journal-title":"Neural Netw."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Chen, L., Shi, W., and Deng, D. (2021). 
Improved YOLOv3 based on attention mechanism for fast and accurate ship detection in optical remote sensing images. Remote Sens., 13.","DOI":"10.3390\/rs13040660"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"145","DOI":"10.1016\/j.cja.2020.12.013","article-title":"Ship detection and classification from optical remote sensing images: A survey","volume":"34","author":"Li","year":"2021","journal-title":"Chin. J. Aeronaut."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Dong, Y., Chen, F., Han, S., and Liu, H. (2021). Ship object detection of remote sensing image based on visual attention. Remote Sens., 13.","DOI":"10.3390\/rs13163192"},{"key":"ref_18","first-page":"5210322","article-title":"HOG-ShipCLSNet: A novel deep learning network with hog feature fusion for SAR ship classification","volume":"60","author":"Zhang","year":"2021","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"5620314","DOI":"10.1109\/TGRS.2022.3162195","article-title":"An explainable attention network for fine-grained ship classification using remote-sensing images","volume":"60","author":"Xiong","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"9722","DOI":"10.1109\/JSTARS.2022.3220503","article-title":"Multigranularity Self-Attention Network for Fine-Grained Ship Detection in Remote Sensing Images","volume":"15","author":"Ouyang","year":"2022","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"key":"ref_21","first-page":"20","article-title":"Cross-modal knowledge distillation in deep networks for SAR image classification","volume":"Volume 12099","author":"Jahan","year":"2022","journal-title":"Proceedings of the Geospatial Informatics XII"},{"key":"ref_22","unstructured":"Krizhevsky, A., Sutskever, I., and Hinton, G.E. (2012, January 3\u20136). Imagenet classification with deep convolutional neural networks. 
Proceedings of the Advances in Neural Information Processing Systems 25 (NIPS 2012), Lake Tahoe, NV, USA."},{"key":"ref_23","unstructured":"Simonyan, K., and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv."},{"key":"ref_24","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, June 26\u2013July 1). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Xie, S., Girshick, R., Doll\u00e1r, P., Tu, Z., and He, K. (2017, January 21\u201326). Aggregated residual transformations for deep neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.634"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K.Q. (2017, January 21\u201326). Densely connected convolutional networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.243"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., RoyChowdhury, A., and Maji, S. (2015, January 7\u201313). Bilinear CNN models for fine-grained visual recognition. Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile.","DOI":"10.1109\/ICCV.2015.170"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Fu, J., Zheng, H., and Mei, T. (2017, January 21\u201326). Look closer to see better: Recurrent attention convolutional neural network for fine-grained image recognition. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.476"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"209","DOI":"10.1016\/j.inffus.2022.12.025","article-title":"An evidential combination method with multi-color spaces for remote sensing image scene classification","volume":"93","author":"Huang","year":"2023","journal-title":"Inf. Fusion"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Chen, Y., Bai, Y., Zhang, W., and Mei, T. (2019, January 15\u201320). Destruction and construction learning for fine-grained image recognition. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00530"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Zheng, H., Fu, J., Zha, Z.J., and Luo, J. (2019, January 15\u201320). Looking for the devil in the details: Learning trilinear attention sampling network for fine-grained image recognition. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00515"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Chen, C.F.R., Fan, Q., and Panda, R. (2021, January 11\u201317). Crossvit: Cross-attention multi-scale vision transformer for image classification. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, BC, Canada.","DOI":"10.1109\/ICCV48922.2021.00041"},{"key":"ref_33","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2020). An image is worth 16 \u00d7 16 words: Transformers for image recognition at scale. arXiv."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021, January 11\u201317). 
Swin transformer: Hierarchical vision transformer using shifted windows. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, BC, Canada.","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"ref_35","first-page":"4707916","article-title":"Contrastive learning for fine-grained ship classification in remote sensing images","volume":"60","author":"Chen","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"1914","DOI":"10.1109\/JSTARS.2023.3241969","article-title":"Fine-Grained Ship Detection in High-Resolution Satellite Images With Shape-Aware Feature Learning","volume":"16","author":"Guo","year":"2023","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"key":"ref_37","first-page":"1527","article-title":"Fine-grained ship image recognition based on BCNN with inception and AM-Softmax","volume":"73","author":"Zhang","year":"2022","journal-title":"Comput. Mater. Contin."},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Jahan, C.S., Savakis, A., and Blasch, E. (2022, January 26\u201329). Sar image classification with knowledge distillation and class balancing for long-tailed distributions. Proceedings of the 2022 IEEE 14th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP), Nafplio, Greece.","DOI":"10.1109\/IVMSP54334.2022.9816201"},{"key":"ref_39","unstructured":"Redmon, J., and Farhadi, A. (2018). Yolov3: An incremental improvement. arXiv."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Doll\u00e1r, P., Girshick, R., He, K., Hariharan, B., and Belongie, S. (2017, January 21\u201326). Feature pyramid networks for object detection. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.106"},{"key":"ref_41","first-page":"330","article-title":"Dempster-shafer theory","volume":"1","author":"Shafer","year":"1992","journal-title":"Encycl. Artif. Intell."},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"513","DOI":"10.1109\/TR.2018.2800014","article-title":"Multisensor fault diagnosis modeling based on the evidence theory","volume":"67","author":"Lin","year":"2018","journal-title":"IEEE Trans. Reliab."},{"key":"ref_43","first-page":"48","article-title":"Improvement of proportional conflict redistribution rules of combination of basic belief assignments","volume":"16","author":"Dezert","year":"2021","journal-title":"J. Adv. Inf. Fusion (JAIF)"},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"1605","DOI":"10.1109\/TCYB.2017.2710205","article-title":"Classifier fusion with contextual reliability evaluation","volume":"48","author":"Liu","year":"2017","journal-title":"IEEE Trans. Cybern."}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/15\/3916\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T20:27:36Z","timestamp":1760128056000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/15\/3916"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,8,7]]},"references-count":44,"journal-issue":{"issue":"15","published-online":{"date-parts":[[2023,8]]}},"alternative-id":["rs15153916"],"URL":"https:\/\/doi.org\/10.3390\/rs15153916","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,8,7]]}}}