{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,9]],"date-time":"2026-05-09T17:09:56Z","timestamp":1778346596711,"version":"3.51.4"},"reference-count":27,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,3,17]],"date-time":"2025-03-17T00:00:00Z","timestamp":1742169600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,3,17]],"date-time":"2025-03-17T00:00:00Z","timestamp":1742169600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Ind. Artif. Intell."],"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Traffic sign recognition is crucial for intelligent transportation and autonomous driving, ensuring road safety and efficient traffic management. This paper proposes a lightweight enhanced MobileViT model (E-MobileViT). It is based on the MobileViT model, combining the advantages of CNN and Transformer. We integrate Efficient Local Attention (ELA) and Convolutional Block Attention Module (CBAM) mechanisms in the model to improve feature extraction. The proposed model improves the feature fusion structure and significantly reduces the number of model parameters. We evaluated the model on the German Traffic Sign Recognition Benchmark (GTSRB), Belgian Traffic Signs Database (BTSD), and China Traffic Signs Database (TSRD) datasets and its accuracy reaches 99.61%, 99.26% and 97.34%, respectively, which outperforms traditional and advanced models. We confirmed the key role of ELA and CBAM mechanisms through ablation experiments. 
With fewer parameters than mainstream models, our E-MobileViT model is suitable for resource-constrained environments such as mobile devices, providing a balanced solution for traffic sign recognition tasks.<\/jats:p>","DOI":"10.1007\/s44244-025-00024-2","type":"journal-article","created":{"date-parts":[[2025,3,17]],"date-time":"2025-03-17T13:19:28Z","timestamp":1742217568000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":13,"title":["E-MobileViT: a lightweight model for traffic sign recognition"],"prefix":"10.1007","volume":"3","author":[{"given":"Shiqi","family":"Song","sequence":"first","affiliation":[]},{"given":"Xinfeng","family":"Ye","sequence":"additional","affiliation":[]},{"given":"Sathiamoorthy","family":"Manoharan","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,3,17]]},"reference":[{"key":"24_CR1","unstructured":"Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Andreetto M, Adam H (2017) Mobilenets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861"},{"key":"24_CR2","unstructured":"Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S et al (2020) An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929"},{"key":"24_CR3","unstructured":"Mehta S, Rastegari M (2021) Mobilevit: light-weight, general-purpose, and mobile-friendly vision transformer. arXiv preprint arXiv:2110.02178"},{"key":"24_CR4","doi-asserted-by":"crossref","unstructured":"Woo S, Park J, Lee J-Y, Kweon IS (2018) Cbam: convolutional block attention module. 
In: Proceedings of the European Conference on Computer Vision (ECCV), pp 3\u201319","DOI":"10.1007\/978-3-030-01234-2_1"},{"key":"24_CR5","unstructured":"Xu W, Wan Y (2024) Ela: efficient local attention for deep convolutional neural networks. arXiv preprint arXiv:2403.01123"},{"issue":"7","key":"24_CR6","doi-asserted-by":"publisher","first-page":"203","DOI":"10.12700\/APH.21.7.2024.7.12","volume":"21","author":"C Ferencz","year":"2024","unstructured":"Ferencz C, Z\u00f6ldy M (2024) Neural network-based multi-class traffic-sign classification with the German traffic sign recognition benchmark. Acta Polytech Hung 21(7):203\u2013220","journal-title":"Acta Polytech Hung"},{"key":"24_CR7","doi-asserted-by":"publisher","DOI":"10.3390\/electronics13020306","author":"MA Khan","year":"2024","unstructured":"Khan MA, Park H (2024) Exploring explainable artificial intelligence techniques for interpretable neural networks in traffic sign recognition systems. Electronics. https:\/\/doi.org\/10.3390\/electronics13020306","journal-title":"Electronics"},{"issue":"2","key":"24_CR8","doi-asserted-by":"publisher","first-page":"33","DOI":"10.3390\/jsan12020033","volume":"12","author":"XR Lim","year":"2023","unstructured":"Lim XR, Lee CP, Lim KM, Ong TS (2023) Enhanced traffic sign recognition with ensemble learning. J Sens Actuator Netw 12(2):33","journal-title":"J Sens Actuator Netw"},{"issue":"8","key":"24_CR9","doi-asserted-by":"publisher","first-page":"6085","DOI":"10.1007\/s00521-021-06762-5","volume":"34","author":"R Abdel-Salam","year":"2022","unstructured":"Abdel-Salam R, Mostafa R, Abdel-Gawad AH (2022) RIECNN: real-time image enhanced CNN for traffic sign recognition. 
Neural Comput Appl 34(8):6085\u20136096","journal-title":"Neural Comput Appl"},{"key":"24_CR10","doi-asserted-by":"publisher","DOI":"10.1016\/j.heliyon.2024.e26182","author":"W Wei","year":"2024","unstructured":"Wei W, Zhang L, Yang K, Li J, Cui N, Han Y, Zhang N, Yang X, Tan H, Wang K (2024) A lightweight network for traffic sign recognition based on multi-scale feature and attention mechanism. Heliyon. https:\/\/doi.org\/10.1016\/j.heliyon.2024.e26182","journal-title":"Heliyon"},{"key":"24_CR11","doi-asserted-by":"publisher","DOI":"10.3390\/electronics12081802","author":"MA Khan","year":"2023","unstructured":"Khan MA, Park H, Chae J (2023) A lightweight convolutional neural network (CNN) architecture for traffic sign recognition in urban road networks. Electronics. https:\/\/doi.org\/10.3390\/electronics12081802","journal-title":"Electronics"},{"issue":"05","key":"24_CR12","first-page":"88","volume":"2","author":"BB Mamatkulovich","year":"2022","unstructured":"Mamatkulovich BB (2022) Lightweight residual layers based convolutional neural networks for traffic sign recognition. Eur Int J Multidiscip Res Manag Stud 2(05):88\u201394","journal-title":"Eur Int J Multidiscip Res Manag Stud"},{"key":"24_CR13","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2020.114481","volume":"168","author":"WA Haque","year":"2021","unstructured":"Haque WA, Arefin S, Shihavuddin A, Hasan MA (2021) DeepThin: a novel lightweight CNN architecture for traffic sign recognition without GPU requirements. Expert Syst Appl 168:114481","journal-title":"Expert Syst Appl"},{"key":"24_CR14","unstructured":"Mingwin S, Shisu Y, Wanwag Y, Huing S (2024) Revolutionizing traffic sign recognition: Unveiling the potential of vision transformers. arXiv preprint arXiv:2404.19066"},{"key":"24_CR15","doi-asserted-by":"crossref","unstructured":"Manzari ON, Boudesh A, Shokouhi SB (2022) Pyramid transformer for traffic sign detection. 
In: 2022 12th International Conference on Computer and Knowledge Engineering (ICCKE), IEEE, pp 112\u2013116","DOI":"10.1109\/ICCKE57176.2022.9960090"},{"key":"24_CR16","doi-asserted-by":"crossref","unstructured":"Zhang J, He H, Li W, Kuang L, Yu F, Zhao J (2023) Improving KIII model and its application in traffic sign recognition","DOI":"10.21203\/rs.3.rs-3146726\/v1"},{"issue":"7","key":"24_CR17","doi-asserted-by":"publisher","first-page":"8135","DOI":"10.1007\/s12652-021-03584-0","volume":"14","author":"C Dewi","year":"2023","unstructured":"Dewi C, Chen R-C, Yu H, Jiang X (2023) Robust detection method for improving small traffic sign recognition based on spatial pyramid pooling. J Ambient Intell Hum Comput 14(7):8135\u20138152","journal-title":"J Ambient Intell Hum Comput"},{"issue":"9","key":"24_CR18","doi-asserted-by":"publisher","first-page":"15794","DOI":"10.1109\/TITS.2022.3145467","volume":"23","author":"W Min","year":"2022","unstructured":"Min W, Liu R, He D, Han Q, Wei Q, Wang Q (2022) Traffic sign recognition based on semantic scene understanding and structural traffic sign location. IEEE Trans Intell Transp Syst 23(9):15794\u201315807","journal-title":"IEEE Trans Intell Transp Syst"},{"key":"24_CR19","doi-asserted-by":"crossref","unstructured":"Stallkamp J, Schlipsing M, Salmen J, Igel C (2011) The German traffic sign recognition benchmark: a multi-class classification competition. In: The 2011 International Joint Conference on Neural Networks, IEEE, pp 1453\u20131460","DOI":"10.1109\/IJCNN.2011.6033395"},{"key":"24_CR20","doi-asserted-by":"crossref","unstructured":"Mathias M, Timofte R, Benenson R, Van\u00a0Gool L (2013) Traffic sign recognition-how far are we from the solution? In: The 2013 International Joint Conference on Neural Networks (IJCNN), IEEE, pp 1\u20138","DOI":"10.1109\/IJCNN.2013.6707049"},{"key":"24_CR21","unstructured":"Chinese Traffic Sign Database (n.d.). http:\/\/www.nlpr.ia.ac.cn\/pal\/trafficdata\/index.html. 
Accessed 27 Feb 2025."},{"key":"24_CR22","doi-asserted-by":"publisher","first-page":"95","DOI":"10.1016\/j.comnet.2018.02.026","volume":"136","author":"T Yang","year":"2018","unstructured":"Yang T, Long X, Sangaiah AK, Zheng Z, Tong C (2018) Deep detection network for real-life traffic sign in vehicular networks. Computer Netw 136:95\u2013104. https:\/\/doi.org\/10.1016\/j.comnet.2018.02.026","journal-title":"Computer Netw"},{"key":"24_CR23","doi-asserted-by":"publisher","first-page":"193","DOI":"10.1007\/s13735-017-0129-8","volume":"6","author":"Y Saadna","year":"2017","unstructured":"Saadna Y, Behloul A (2017) An overview of traffic sign detection and classification methods. Int J Multimed Inform Retr 6:193\u2013210","journal-title":"Int J Multimed Inform Retr"},{"key":"24_CR24","doi-asserted-by":"crossref","unstructured":"Forman G (2004) A pitfall and solution in multi-class feature selection for text classification. In: Proceedings of the Twenty-first International Conference on Machine Learning, p 38","DOI":"10.1145\/1015330.1015356"},{"key":"24_CR25","doi-asserted-by":"publisher","first-page":"429","DOI":"10.1016\/j.ins.2019.11.004","volume":"513","author":"F Thabtah","year":"2020","unstructured":"Thabtah F, Hammoud S, Kamalov F, Gonsalves A (2020) Data imbalance in classification: experimental evaluation. Inform Sci 513:429\u2013441. https:\/\/doi.org\/10.1016\/j.ins.2019.11.004","journal-title":"Inform Sci"},{"key":"24_CR26","doi-asserted-by":"crossref","unstructured":"Hou Q, Zhou D, Feng J (2021) Coordinate attention for efficient mobile network design. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp 13713\u201313722","DOI":"10.1109\/CVPR46437.2021.01350"},{"key":"24_CR27","doi-asserted-by":"crossref","unstructured":"Hu J, Shen L, Sun G (2018) Squeeze-and-excitation networks. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 7132\u20137141","DOI":"10.1109\/CVPR.2018.00745"}],"container-title":["Industrial Artificial Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44244-025-00024-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s44244-025-00024-2\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44244-025-00024-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,3,17]],"date-time":"2025-03-17T13:19:41Z","timestamp":1742217581000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s44244-025-00024-2"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,3,17]]},"references-count":27,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["24"],"URL":"https:\/\/doi.org\/10.1007\/s44244-025-00024-2","relation":{},"ISSN":["2731-667X"],"issn-type":[{"value":"2731-667X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,3,17]]},"assertion":[{"value":"1 July 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"21 February 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"17 March 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing 
interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"3"}}