{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,26]],"date-time":"2025-11-26T16:40:35Z","timestamp":1764175235619,"version":"build-2065373602"},"reference-count":31,"publisher":"MDPI AG","issue":"18","license":[{"start":{"date-parts":[[2022,9,13]],"date-time":"2022-09-13T00:00:00Z","timestamp":1663027200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Natural Science Foundation of Shandong Province","award":["ZR2020QF108","ZR2020MF148","300102342511","62072391","62066013","62172351","2021KJ080"],"award-info":[{"award-number":["ZR2020QF108","ZR2020MF148","300102342511","62072391","62066013","62172351","2021KJ080"]}]},{"name":"Fundamental Research Funds for the Central Universities, CHD","award":["ZR2020QF108","ZR2020MF148","300102342511","62072391","62066013","62172351","2021KJ080"],"award-info":[{"award-number":["ZR2020QF108","ZR2020MF148","300102342511","62072391","62066013","62172351","2021KJ080"]}]},{"name":"National Natural Science Foundation of China","award":["ZR2020QF108","ZR2020MF148","300102342511","62072391","62066013","62172351","2021KJ080"],"award-info":[{"award-number":["ZR2020QF108","ZR2020MF148","300102342511","62072391","62066013","62172351","2021KJ080"]}]},{"name":"Youth Innovation Science and Technology Support Program of Shandong Province","award":["ZR2020QF108","ZR2020MF148","300102342511","62072391","62066013","62172351","2021KJ080"],"award-info":[{"award-number":["ZR2020QF108","ZR2020MF148","300102342511","62072391","62066013","62172351","2021KJ080"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Traffic sign detection and recognition is an essential and challenging task for driverless cars. 
However, traffic sign detection in most scenarios is a small-target detection problem, and most existing object detection methods perform poorly in such cases, which increases the difficulty of detection. To further improve the accuracy of small-object detection for traffic signs, this paper proposes an optimization strategy based on the YOLOv4 network. First, an improved triplet attention mechanism is added to the backbone network; combined with optimized weights, it makes the network focus more on the acquisition of channel and spatial features. Second, a bidirectional feature pyramid network (BiFPN) is used in the neck network to enhance feature fusion, which effectively enlarges the feature perception field for small objects. The improved model and several state-of-the-art (SOTA) methods were compared on the joint dataset TT100K-COCO. Experimental results show that the enhanced network achieves 60.4% mAP (mean average precision), surpassing YOLOv4 by 8% at the same input size. With a larger input size, it achieves a best performance of 66.4% mAP. 
This work provides a reference for research on obtaining higher accuracy for traffic sign detection in autonomous driving.<\/jats:p>","DOI":"10.3390\/s22186930","type":"journal-article","created":{"date-parts":[[2022,9,13]],"date-time":"2022-09-13T22:37:28Z","timestamp":1663108648000},"page":"6930","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":17,"title":["Real-Time and Efficient Multi-Scale Traffic Sign Detection Method for Driverless Cars"],"prefix":"10.3390","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-7606-1411","authenticated-orcid":false,"given":"Xuan","family":"Wang","sequence":"first","affiliation":[{"name":"School of Computer and Control Engineering, Yantai University, Yantai 264005, China"}]},{"given":"Jian","family":"Guo","sequence":"additional","affiliation":[{"name":"School of Computer and Control Engineering, Yantai University, Yantai 264005, China"}]},{"given":"Jinglei","family":"Yi","sequence":"additional","affiliation":[{"name":"School of Computer and Control Engineering, Yantai University, Yantai 264005, China"}]},{"given":"Yongchao","family":"Song","sequence":"additional","affiliation":[{"name":"School of Computer and Control Engineering, Yantai University, Yantai 264005, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6688-5014","authenticated-orcid":false,"given":"Jindong","family":"Xu","sequence":"additional","affiliation":[{"name":"School of Computer and Control Engineering, Yantai University, Yantai 264005, China"}]},{"given":"Weiqing","family":"Yan","sequence":"additional","affiliation":[{"name":"School of Computer and Control Engineering, Yantai University, Yantai 264005, China"}]},{"given":"Xin","family":"Fu","sequence":"additional","affiliation":[{"name":"College of Transportation Engineering, Chang\u2019an University, Xi\u2019an 710064, China"},{"name":"Engineering Research Center of Highway Infrastructure Digitalization, Ministry of Education, 
Xi\u2019an 710064, China"}]}],"member":"1968","published-online":{"date-parts":[[2022,9,13]]},"reference":[{"key":"ref_1","first-page":"1","article-title":"Convolutional deep belief networks on cifar-10","volume":"40","author":"Krizhevsky","year":"2010","journal-title":"Unpubl. Manuscr."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"84","DOI":"10.1145\/3065386","article-title":"Imagenet classification with deep convolutional neural networks","volume":"60","author":"Krizhevsky","year":"2017","journal-title":"Commun. ACM"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., and Fei-Fei, L. (2009, January 20\u201325). Imagenet: A large-scale hierarchical image database. Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA.","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"ref_4","unstructured":"Simonyan, K., and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015, January 7\u201312). Going deeper with convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"ref_6","unstructured":"Ren, S., Sun, J., He, K., and Zhang, X. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the CVPR, Las Vegas, NV, USA."},{"key":"ref_7","unstructured":"Tan, M., and Le, Q. (2019, January 9\u201315). Efficientnet: Rethinking model scaling for convolutional neural networks. Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016, January 27\u201330). 
You only look once: Unified, real-time object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.91"},{"key":"ref_9","unstructured":"Redmon, J., and Farhadi, A. (2018). Yolov3: An incremental improvement. arXiv."},{"key":"ref_10","unstructured":"Bochkovskiy, A., Wang, C.Y., and Liao, H.Y.M. (2020). Yolov4: Optimal speed and accuracy of object detection. arXiv."},{"key":"ref_11","unstructured":"Long, X., Deng, K., Wang, G., Zhang, Y., Dang, Q., Gao, Y., Shen, H., Ren, J., Han, S., and Ding, E. (2020). PP-YOLO: An effective and efficient implementation of object detector. arXiv."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., and Berg, A.C. (2016, January 11\u201314). Ssd: Single shot multibox detector. Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46448-0_2"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Girshick, R., Donahue, J., Darrell, T., and Malik, J. (2014, January 23\u201328). Rich feature hierarchies for accurate object detection and semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA.","DOI":"10.1109\/CVPR.2014.81"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Girshick, R. (2015, January 7\u201313). Fast r-cnn. Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile.","DOI":"10.1109\/ICCV.2015.169"},{"key":"ref_15","unstructured":"Ren, S., He, K., Girshick, R., and Sun, J. (2015). Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst., 28."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"He, K., Gkioxari, G., Doll\u00e1r, P., and Girshick, R. (2017, January 22\u201329). Mask r-cnn. 
Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.322"},{"key":"ref_17","unstructured":"Tian, Z., Shen, C., Chen, H., and He, T. (November, January 27). Fcos: Fully convolutional one-stage object detection. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Korea."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Zhang, S., Chi, C., Yao, Y., Lei, Z., and Li, S.Z. (2020, January 13\u201319). Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00978"},{"key":"ref_19","unstructured":"Zhou, X., Wang, D., and Kr\u00e4henb\u00fchl, P. (2019). Objects as points. arXiv."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Hu, J., Shen, L., and Sun, G. (2018, January 18\u201323). Squeeze-and-excitation networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00745"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Woo, S., Park, J., Lee, J.Y., and Kweon, I.S. (2018, January 8\u201314). Cbam: Convolutional block attention module. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01234-2_1"},{"key":"ref_22","unstructured":"Lee, H., Kim, H.E., and Nam, H. (November, January 27). Srm: A style-based recalibration module for convolutional neural networks. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Korea."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Misra, D., Nalamada, T., Arasanipalai, A.U., and Hou, Q. (2021, January 3\u20138). Rotate to attend: Convolutional triplet attention module. 
Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA.","DOI":"10.1109\/WACV48630.2021.00318"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Singh, B., and Davis, L.S. (2018, January 18\u201323). An analysis of scale invariance in object detection snip. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00377"},{"key":"ref_25","unstructured":"Singh, B., Najibi, M., and Davis, L.S. (2018). Sniper: Efficient multi-scale training. Adv. Neural Inf. Process. Syst., 31."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Doll\u00e1r, P., Girshick, R., He, K., Hariharan, B., and Belongie, S. (2017, January 21\u201326). Feature pyramid networks for object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.106"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Liu, S., Qi, L., Qin, H., Shi, J., and Jia, J. (2018, January 18\u201323). Path aggregation network for instance segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00913"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Tan, M., Pang, R., and Le, Q.V. (2020, January 13\u201319). Efficientdet: Scalable and efficient object detection. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.01079"},{"key":"ref_29","unstructured":"Li, Y., Chen, Y., Wang, N., and Zhang, Z. (November, January 27). Scale-aware trident networks for object detection. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Korea."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Zhu, Z., Liang, D., Zhang, S., Huang, X., Li, B., and Hu, S. (2016, January 27\u201330). 
Traffic-sign detection and classification in the wild. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.232"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., and Zitnick, C.L. (2014, January 6\u201312). Microsoft coco: Common objects in context. Proceedings of the European Conference on Computer Vision, Zurich, Switzerland.","DOI":"10.1007\/978-3-319-10602-1_48"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/18\/6930\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:30:49Z","timestamp":1760142649000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/18\/6930"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,9,13]]},"references-count":31,"journal-issue":{"issue":"18","published-online":{"date-parts":[[2022,9]]}},"alternative-id":["s22186930"],"URL":"https:\/\/doi.org\/10.3390\/s22186930","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2022,9,13]]}}}