{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,6]],"date-time":"2026-03-06T17:33:51Z","timestamp":1772818431913,"version":"3.50.1"},"reference-count":25,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2026,2,5]],"date-time":"2026-02-05T00:00:00Z","timestamp":1770249600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2026,3,6]],"date-time":"2026-03-06T00:00:00Z","timestamp":1772755200000},"content-version":"vor","delay-in-days":29,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Manipal Academy of Higher Education, Manipal"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Discov Artif Intell"],"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>The ability of vehicles to detect obstacles on the road is a critical component in advancing autonomous driving systems. Driving involves complex perception and decision-making, which is a challenge for humans and automated systems. In this work, we present and evaluate two object detection models for identifying various road entities, including cars, trucks, pedestrians, and other obstacles. The first model is a modified Region-based Convolutional Neural Network (R-CNN), and the second is a single-stage detector based on the EfficientDet-D0 architecture. In this work, R-CNN uses VGG-16 as its base CNN model for feature extraction. The combination provides strong representational power by utilizing VGG-16 with structured region-based detection from R-CNN, enabling accurate obstacle classification and localization. The R-CNN model was enhanced with architectural modifications tailored for two-stage detection using hybrid fully connected layers (FCL). 
In contrast, the EfficientDet-D0 model was trained using transfer learning on the Udacity self-driving car dataset. The EfficientDet-D0 model demonstrated superior performance in real-time conditions, reporting a detection accuracy of 76.8% mAP@0.5, an IoU of 0.73, and a processing speed of 30 frames per second (FPS). By comparison, the custom R-CNN model achieved 69.3% mAP@0.5, with a notable processing rate of 32 FPS, making it suitable for real-time deployment. Despite the promising results, certain obstacle categories remain inadequately detected at high vehicle speeds. Our model detects road obstacles in real time, achieving both low latency and high accuracy. We present the accuracy and loss metrics for both models to provide a detailed analysis of their performance compared to baseline methods. The ablation study demonstrated that transfer learning significantly enhanced model performance. The EfficientDet-D0 model\u2019s detection accuracy dropped by over 14% without transfer learning.<\/jats:p>","DOI":"10.1007\/s44163-026-00871-7","type":"journal-article","created":{"date-parts":[[2026,2,5]],"date-time":"2026-02-05T10:05:52Z","timestamp":1770285952000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Real-time traffic obstacle detection using hybrid FCL for R-CNN and EfficientDet with transfer learning"],"prefix":"10.1007","volume":"6","author":[{"given":"K.","family":"Veningston","sequence":"first","affiliation":[]},{"given":"M.","family":"Ronalda","sequence":"additional","affiliation":[]},{"given":"R.","family":"Sathiyaraj","sequence":"additional","affiliation":[]},{"given":"C.","family":"Selvan","sequence":"additional","affiliation":[]},{"given":"P. 
V.","family":"Venkateswara Rao","sequence":"additional","affiliation":[]},{"given":"Janardhan","family":"Karravula","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2026,2,5]]},"reference":[{"key":"871_CR1","unstructured":"Goodfellow I, Bengio Y, Courville A. Deep Learning. MIT Press; 2016. pp. 227\u2013239. http:\/\/www.deeplearningbook.org."},{"key":"871_CR2","unstructured":"Redmon J, et al. You only look once: unified, real-time object detection. Tech. Rep. 2016. arXiv: 1506.02640v5. https:\/\/pjreddie.com\/darknet\/yolov1."},{"key":"871_CR3","doi-asserted-by":"publisher","unstructured":"Tan M, Pang R, Le QV. EfficientDet: scalable and efficient object detection. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. 2020. https:\/\/doi.org\/10.48550\/arXiv.1911.09070.","DOI":"10.48550\/arXiv.1911.09070"},{"key":"871_CR4","doi-asserted-by":"crossref","unstructured":"Janai J, G\u00fcney F, Behl A, Geiger A. Computer vision for autonomous vehicles: problems, datasets and state of the art. Foundations Trends\u00ae Comput Graph Vision. 2020;12(1\u20133):1\u2013308.","DOI":"10.1561\/0600000079"},{"key":"871_CR5","unstructured":"Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations (ICRL). 2015. pp. 1\u201314. arXiv: 1409.1556."},{"key":"871_CR6","unstructured":"He K, et al. Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2015. arXiv: 1512.03385."},{"key":"871_CR7","doi-asserted-by":"crossref","unstructured":"Everingham M, et al. The PASCAL visual object classes (VOC) challenge. 2009. http:\/\/host.robots.ox.ac.uk\/pascal\/VOC\/pubs\/everingham10.pdf.","DOI":"10.1007\/s11263-009-0275-4"},{"key":"871_CR8","unstructured":"Ross B, Girshick. Fast R-CNN. In: CoRR abs\/1504.08083. 2015. arXiv: 1504. 
08083."},{"key":"871_CR9","doi-asserted-by":"crossref","unstructured":"Chung H, Lee S, Park J. Deep neural network using trainable activation functions. 2016. pp. 348\u2013352.","DOI":"10.1109\/IJCNN.2016.7727219"},{"key":"871_CR10","unstructured":"Fei-Fei LI, Johnson J, Yeung S. Stanford online course: convolutional neural networks for visual recognition. Lecture 6\u2014Training Neural Networks I. Stanford Vision and learning lab, 2018."},{"key":"871_CR11","unstructured":"Ramachandran P, Zoph B, Le QV. Searching for Activation Functions. 2017. arXiv: 1710.05941v2"},{"key":"871_CR12","first-page":"19","volume":"1","author":"C Hennig","year":"2007","unstructured":"Hennig C, Kutlukaya M. Some thoughts about the design of loss functions. Tech Rep. 2007;1:19\u201339.","journal-title":"Tech Rep"},{"key":"871_CR13","doi-asserted-by":"crossref","unstructured":"Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. 1986.","DOI":"10.1038\/323533a0"},{"key":"871_CR14","unstructured":"Zang E. Udacity self-driving dataset https:\/\/www.kaggle.com\/datasets\/sshikamaru\/udacity-self-driving-car-dataset."},{"key":"871_CR15","unstructured":"Tan M, Le QV. EfficientNet: rethinking model scaling for convolutional neural networks. arXiv:1905.11946v5, Sep 2020."},{"key":"871_CR16","doi-asserted-by":"publisher","unstructured":"Castillo J, et al. A real-time traffic monitoring system based on YOLOv8 for vehicle detection and classification. In: 2024 IEEE International Conference on Green Energy and Smart Systems (GESS), Long Beach, CA, USA, 2024, pp. 1\u20136, https:\/\/doi.org\/10.1109\/GESS63533.2024.10784465.","DOI":"10.1109\/GESS63533.2024.10784465"},{"key":"871_CR17","doi-asserted-by":"publisher","unstructured":"Zheng Z, Hosseini A, Chen D, Shoghli O, Heydarian A. Real-time roadway obstacle detection for electric scooters using deep learning and multi-sensor fusion. 2025. 
https:\/\/doi.org\/10.48550\/arXiv.2504.03171.","DOI":"10.48550\/arXiv.2504.03171"},{"issue":"13","key":"871_CR18","doi-asserted-by":"publisher","DOI":"10.3390\/s24134407","volume":"24","author":"S Shi","year":"2024","unstructured":"Shi S, Ni J, Kong X, Zhu H, Zhan J, Sun Q, et al. An obstacle detection method based on longitudinal active vision. Sensors. 2024;24(13):4407. https:\/\/doi.org\/10.3390\/s24134407.","journal-title":"Sensors"},{"key":"871_CR19","doi-asserted-by":"publisher","first-page":"6164","DOI":"10.1038\/s41598-025-89785-5","volume":"15","author":"H Xu","year":"2025","unstructured":"Xu H. Encompass obstacle image detection method based on u-v disparity map and RANSAC algorithm. Sci Rep. 2025;15:6164. https:\/\/doi.org\/10.1038\/s41598-025-89785-5.","journal-title":"Sci Rep"},{"key":"871_CR20","doi-asserted-by":"publisher","first-page":"68","DOI":"10.1016\/j.trpro.2025.03.135","volume":"85","author":"QNH Minh","year":"2025","unstructured":"Minh QNH, Dinh NN, Ho LV, Huu CP. Real-time traffic accident detection using YOLOv8. Transp Res Procedia. 2025;85:68\u201375. https:\/\/doi.org\/10.1016\/j.trpro.2025.03.135.","journal-title":"Transp Res Procedia"},{"key":"871_CR21","doi-asserted-by":"publisher","first-page":"108458","DOI":"10.1016\/j.engappai.2024.108458","volume":"133","author":"Y Sun","year":"2024","unstructured":"Sun Y, Sun Z, Chen W. The evolution of object detection methods. Eng Appl Artif Intell. 2024;133:108458. https:\/\/doi.org\/10.1016\/j.engappai.2024.108458.","journal-title":"Eng Appl Artif Intell"},{"key":"871_CR22","doi-asserted-by":"publisher","unstructured":"He Z, Zhang L. Domain adaptive object detection via asymmetric tri-way faster-RCNN. In: Computer Vision\u2014ECCV 2020: 16th European Conference, Glasgow, UK, August 23\u201328, 2020, Proceedings, Part XXIV. Springer-Verlag, Berlin, Heidelberg, 309\u2013324. 
https:\/\/doi.org\/10.1007\/978-3-030-58586-0_19.","DOI":"10.1007\/978-3-030-58586-0_19"},{"key":"871_CR23","doi-asserted-by":"publisher","unstructured":"Xu M, Qin L, Chen W, Pu S, Zhang L. Multi-view adversarial discriminator: mine the non-causal factors for object detection in unseen domains. In Proc. 2023 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 8103\u20138112. https:\/\/doi.org\/10.1109\/CVPR52729.2023.00783.","DOI":"10.1109\/CVPR52729.2023.00783"},{"key":"871_CR24","doi-asserted-by":"publisher","first-page":"4868","DOI":"10.1109\/TIP.2023.3306915","volume":"32","author":"L Zhang","year":"2023","unstructured":"Zhang L, Qin L, Xu M, Chen W, Pu S, Zhang W. Randomized spectrum transformations for adapting object detector in unseen domains. IEEE Trans Image Process. 2023;32:4868\u201379. https:\/\/doi.org\/10.1109\/TIP.2023.3306915.","journal-title":"IEEE Trans Image Process"},{"key":"871_CR25","doi-asserted-by":"publisher","unstructured":"He Z, Zhang L, Gao X, Zhang D. Multi-adversarial faster-RCNN with paradigm teacher for unrestricted object detection. Int J Comput. 2022;131:680\u2013700. 
https:\/\/doi.org\/10.1007\/s11263-022-01728-z.","DOI":"10.1007\/s11263-022-01728-z"}],"container-title":["Discover Artificial Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s44163-026-00871-7","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44163-026-00871-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44163-026-00871-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,6]],"date-time":"2026-03-06T11:25:13Z","timestamp":1772796313000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s44163-026-00871-7"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,2,5]]},"references-count":25,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2026,12]]}},"alternative-id":["871"],"URL":"https:\/\/doi.org\/10.1007\/s44163-026-00871-7","relation":{},"ISSN":["2731-0809"],"issn-type":[{"value":"2731-0809","type":"electronic"}],"subject":[],"published":{"date-parts":[[2026,2,5]]},"assertion":[{"value":"12 September 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"13 January 2026","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"5 February 2026","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"This study does not involve any experiments on humans or animals performed by any of the authors. Therefore, ethical approval was not required. Not applicable. 
The study does not involve third-party human participants; hence, no consent to participate is required. However, all authors provided informed consent for participation in the study.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"Not applicable. Authors clarify that the study primarily uses publicly available benchmark datasets. Additionally, the real-time analysis was conducted using images of the author(s) themselves, for which full consent to use and publish the images has been obtained. All authors have read and approved the final version of the manuscript and consent to its publication.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}},{"value":"The authors declare no competing interests.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"186"}}