{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,19]],"date-time":"2025-09-19T17:35:28Z","timestamp":1758303328102,"version":"3.44.0"},"reference-count":37,"publisher":"Springer Science and Business Media LLC","issue":"10","license":[{"start":{"date-parts":[[2025,4,28]],"date-time":"2025-04-28T00:00:00Z","timestamp":1745798400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,4,28]],"date-time":"2025-04-28T00:00:00Z","timestamp":1745798400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100004837","name":"Ministerio de Ciencia e Innovaci\u00f3n","doi-asserted-by":"publisher","award":["PTAS-202110"],"award-info":[{"award-number":["PTAS-202110"]}],"id":[{"id":"10.13039\/501100004837","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Gobierno de Espa\u00f1a","award":["ID2021-128327OA-I00","TED2021-129374A-I00"],"award-info":[{"award-number":["ID2021-128327OA-I00","TED2021-129374A-I00"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Appl Intell"],"published-print":{"date-parts":[[2025,7]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Autonomous vehicles in logistics and industrial environments demand robust and efficient perception systems. This study presents a LiDAR-based perception system designed for such environments, focusing on real-time deterministic obstacle detection and tracking with limited computational power. The proposed multi-stage approach leverages 3D data from LiDAR sensors. First, ground removal is performed to filter out static ground points. Then, a filtering step is applied using precomputed maps of the navigation area to filter out static zones from the LiDAR point clouds. 
Next, object segmentation distinguishes structural elements from potential obstacles, followed by clustering and Principal Component Analysis (PCA) to accurately estimate obstacle pose and volume. An obstacle-tracking method ensures continuous monitoring over time. Extensive experiments in realistic logistics and industrial scenarios have been performed, comparing the proposed approach to state-of-the-art deep-learning-based methods, demonstrating the system\u2019s high performance in both accuracy and efficiency.<\/jats:p>","DOI":"10.1007\/s10489-025-06528-9","type":"journal-article","created":{"date-parts":[[2025,4,28]],"date-time":"2025-04-28T01:10:20Z","timestamp":1745802620000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["LiDAR-based perception system for logistics in industrial environments"],"prefix":"10.1007","volume":"55","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-3684-3996","authenticated-orcid":false,"given":"Mart\u00edn","family":"Palos","sequence":"first","affiliation":[]},{"given":"Irene","family":"Cort\u00e9s","sequence":"additional","affiliation":[]},{"given":"\u00c1ngel","family":"Madridano","sequence":"additional","affiliation":[]},{"given":"Francisco","family":"Navas","sequence":"additional","affiliation":[]},{"given":"Carmen","family":"Barbero","sequence":"additional","affiliation":[]},{"given":"Vicente","family":"Milan\u00e9s","sequence":"additional","affiliation":[]},{"given":"Fernando","family":"Garc\u00eda","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,4,28]]},"reference":[{"key":"6528_CR1","doi-asserted-by":"crossref","unstructured":"Sankari J, Imtiaz R (2016) Automated guided vehicle (agv) for industrial sector. 
In: 2016 10th international conference on intelligent systems and control (ISCO), IEEE, pp 1\u20135","DOI":"10.1109\/ISCO.2016.7726962"},{"key":"6528_CR2","doi-asserted-by":"crossref","unstructured":"Fatorachian H, Kazemi H (2021) Impact of industry 4.0 on supply chain performance. Prod Plan Control 32(1):63\u201381","DOI":"10.1080\/09537287.2020.1712487"},{"key":"6528_CR3","doi-asserted-by":"crossref","unstructured":"Strametz D, Reip M, Pichler R, Maasem C, H\u00f6ffernig M, Pichler M (2021) Increased agility by using autonomous agvs in reconfigurable factories. In: Advances in automotive production technology\u2013theory and application: Stuttgart Conference on Automotive Production (SCAP2020), Springer, pp 433\u2013440","DOI":"10.1007\/978-3-662-62962-8_50"},{"key":"6528_CR4","doi-asserted-by":"crossref","unstructured":"Lynch L, Newe T, Clifford J, Coleman J, Walsh J, Toal D (2018) Automated ground vehicle (agv) and sensor technologies-a review. In: 2018 12th International Conference on Sensing Technology (ICST), IEEE, pp 347\u2013352","DOI":"10.1109\/ICSensT.2018.8603640"},{"issue":"5","key":"6528_CR5","doi-asserted-by":"publisher","first-page":"332","DOI":"10.3390\/machines10050332","volume":"10","author":"M Pires","year":"2022","unstructured":"Pires M, Couto P, Santos A, Filipe V (2022) Obstacle detection for autonomous guided vehicles through point cloud clustering using depth data. Mach 10(5):332","journal-title":"Mach"},{"issue":"7","key":"6528_CR6","doi-asserted-by":"publisher","first-page":"4316","DOI":"10.1109\/TITS.2020.3032227","volume":"22","author":"K Muhammad","year":"2020","unstructured":"Muhammad K, Ullah A, Lloret J, Del Ser J, Albuquerque VHC (2020) Deep learning for safe autonomous driving: Current challenges and future directions. 
IEEE Trans Intell Transp Syst 22(7):4316\u20134336","journal-title":"IEEE Trans Intell Transp Syst"},{"issue":"14","key":"6528_CR7","doi-asserted-by":"publisher","first-page":"11016","DOI":"10.1109\/JIOT.2021.3051414","volume":"8","author":"RA Khalil","year":"2021","unstructured":"Khalil RA, Saeed N, Masood M, Fard YM, Alouini MS, Al-Naffouri TY (2021) Deep learning in the industrial internet of things: Potentials, challenges, and emerging applications. IEEE Internet Things J 8(14):11016\u201311040","journal-title":"IEEE Internet Things J"},{"key":"6528_CR8","unstructured":"Horrell M, Reynolds L, McElhinney A (2020) Data science in heavy industry and the internet of things. Harv Data Sci Rev 2(2)"},{"key":"6528_CR9","doi-asserted-by":"crossref","unstructured":"Li D, Chen X, Becchi M, Zong Z (2016) Evaluating the energy efficiency of deep convolutional neural networks on cpus and gpus. In: 2016 IEEE International conferences on big data and cloud computing (BDCloud), social computing and networking (SocialCom), sustainable computing and communications (SustainCom)(BDCloud-SocialCom-SustainCom), IEEE, pp 477\u2013484","DOI":"10.1109\/BDCloud-SocialCom-SustainCom.2016.76"},{"key":"6528_CR10","doi-asserted-by":"crossref","unstructured":"Lin SC, Zhang Y, Hsu CH, Skach M, Haque ME, Tang L, Mars J (2018) The architectural implications of autonomous driving: Constraints and acceleration. In: Proceedings of the 23rd international conference on architectural support for programming languages and operating systems, pp 751\u2013766","DOI":"10.1145\/3173162.3173191"},{"key":"6528_CR11","doi-asserted-by":"crossref","unstructured":"Becker PH, Arnau JM, Gonz\u00e1lez A (2020) Demystifying power and performance bottlenecks in autonomous driving systems. 
In: 2020 IEEE International Symposium on Workload Characterization (IISWC), IEEE, pp 205\u2013215","DOI":"10.1109\/IISWC50251.2020.00028"},{"issue":"8","key":"6528_CR12","doi-asserted-by":"publisher","first-page":"2708","DOI":"10.1109\/TITS.2018.2790264","volume":"19","author":"Z Rozsa","year":"2018","unstructured":"Rozsa Z, Sziranyi T (2018) Obstacle prediction for automated guided vehicles based on point clouds measured by a tilted lidar sensor. IEEE Trans Intell Transp Syst 19(8):2708\u20132720","journal-title":"IEEE Trans Intell Transp Syst"},{"key":"6528_CR13","doi-asserted-by":"crossref","unstructured":"Ding G, Lu H, Bai J, Qin X (2020) Development of a high precision uwb\/vision-based agv and control system. In: 2020 5th International Conference on Control and Robotics Engineering (ICCRE), IEEE, pp 99\u2013103","DOI":"10.1109\/ICCRE49379.2020.9096456"},{"key":"6528_CR14","doi-asserted-by":"publisher","unstructured":"Li Y, Wang D, Li Q, Cheng G, Li Z, Li P (2024) Advanced 3d navigation system for agv in complex smart factory environments. Electron 13(1). https:\/\/doi.org\/10.3390\/electronics13010130","DOI":"10.3390\/electronics13010130"},{"issue":"2","key":"6528_CR15","doi-asserted-by":"publisher","first-page":"3504","DOI":"10.1109\/TTE.2023.3280738","volume":"10","author":"H Wen","year":"2024","unstructured":"Wen H, Song Z, Liu S, Dong Z, Liu C (2024) A hybrid lidar-based mapping framework for efficient path planning of agvs in a massive indoor environment. IEEE Trans Transp Electrification 10(2):3504\u20133517. https:\/\/doi.org\/10.1109\/TTE.2023.3280738","journal-title":"IEEE Trans Transp Electrification"},{"key":"6528_CR16","doi-asserted-by":"crossref","unstructured":"Buck S, Hanten R, Bohlmann K, Zell A (2016) Generic 3d obstacle detection for agvs using time-of-flight cameras. 
In: 2016 IEEE\/RSJ international conference on intelligent robots and systems (IROS), IEEE, pp 4119\u20134124","DOI":"10.1109\/IROS.2016.7759606"},{"key":"6528_CR17","doi-asserted-by":"crossref","unstructured":"O\u2019Mahony N, Campbell S, Carvalho A, Harapanahalli S, Hernandez GV, Krpalkova L, Riordan D, Walsh J (2020) Deep learning vs. traditional computer vision. In: Advances in computer vision: Proceedings of the 2019 Computer Vision Conference (CVC), vol 11, Springer, pp 128\u2013144","DOI":"10.1007\/978-3-030-17795-9_10"},{"key":"6528_CR18","unstructured":"Quigley M, Conley K, Gerkey B, Faust J, Foote T, Leibs J, Wheeler R, Ng AY et\u00a0al (2009) Ros: An open-source robot operating system. In: ICRA workshop on open source software, vol 3. Kobe, Japan, p 5"},{"key":"6528_CR19","doi-asserted-by":"crossref","unstructured":"Li Y, Le Bihan C, Pourtau T, Ristorcelli T, Ibanez-Guzman J (2020) Coarse-to-fine segmentation on lidar point clouds in spherical coordinate and beyond. IEEE Trans Veh Technol 69(12):14588\u201314601","DOI":"10.1109\/TVT.2020.3031330"},{"issue":"1","key":"6528_CR20","doi-asserted-by":"publisher","first-page":"34","DOI":"10.1109\/TRO.2006.889486","volume":"23","author":"G Grisetti","year":"2007","unstructured":"Grisetti G, Stachniss C, Burgard W (2007) Improved techniques for grid mapping with rao-blackwellized particle filters. IEEE Trans Rob 23(1):34\u201346. https:\/\/doi.org\/10.1109\/TRO.2006.889486","journal-title":"IEEE Trans Rob"},{"key":"6528_CR21","unstructured":"(2017) OpenStreetMap contributors: Planet dump retrieved from https:\/\/planet.osm.org. https:\/\/www.openstreetmap.org"},{"key":"6528_CR22","doi-asserted-by":"publisher","unstructured":"Foote T (2013) tf: The transform library. In: Technologies for Practical Robot Applications (TePRA), 2013 IEEE international conference on. Open-source software workshop, pp 1\u20136. 
https:\/\/doi.org\/10.1109\/TePRA.2013.6556373","DOI":"10.1109\/TePRA.2013.6556373"},{"key":"6528_CR23","doi-asserted-by":"crossref","unstructured":"Yan Z, Duckett T, Bellotto N (2019) Online learning for 3d lidar-based human detection: Experimental analysis of point cloud clustering and classification methods. Auton Robot","DOI":"10.1007\/s10514-019-09883-y"},{"key":"6528_CR24","unstructured":"(2017) Point Cloud Library: Moment of inertia and eccentricity based descriptors. https:\/\/pcl.readthedocs.io\/en\/latest\/moment_of_inertia.html"},{"key":"6528_CR25","doi-asserted-by":"crossref","unstructured":"Ding N (2023) An efficient convex hull-based vehicle pose estimation method for 3d lidar. arXiv:2302.01034","DOI":"10.1177\/03611981241250027"},{"key":"6528_CR26","doi-asserted-by":"crossref","unstructured":"Sawyer TW, Diaz A, Salcin E, Friedman JS (2021) Using principle component analysis to estimate geometric parameters from point cloud lidar data. In: Laser radar technology and applications XXVI, vol 11744. SPEI, p 1174403","DOI":"10.1117\/12.2585574"},{"key":"6528_CR27","unstructured":"(2021) Point cloud library: Estimating surface normals in a pointcloud. https:\/\/pcl.readthedocs.io\/projects\/tutorials\/en\/pcl-1.12.0\/normal_estimation.html"},{"key":"6528_CR28","doi-asserted-by":"crossref","unstructured":"Beltr\u00e1n J, Guindel C, Cort\u00e9s I, Barrera A, Astudillo A, Urdiales J, \u00c1lvarez M, Bekka F, Milan\u00e9s V, Garc\u00eda F (2020) Towards autonomous driving: A multi-modal 360 perception proposal. 
In: 2020 IEEE 23rd international conference on intelligent transportation systems (ITSC), IEEE, pp 1\u20136","DOI":"10.1109\/ITSC45102.2020.9294494"},{"issue":"4","key":"6528_CR29","first-page":"20","volume":"14","author":"V Milan\u00e9s","year":"2021","unstructured":"Milan\u00e9s V, Gonz\u00e1lez D, Navas F, Mahtout I, Armand A, Zinoune C, Ramaswamy A, Bekka F, Molina N, Battesti E et al (2021) The tornado project: An automated driving demonstration in peri-urban and rural areas. IEEE Intell Transp Syst Mag 14(4):20\u201336","journal-title":"IEEE Intell Transp Syst Mag"},{"key":"6528_CR30","doi-asserted-by":"crossref","unstructured":"Garcia F, Martin D, De La Escalera A, Armingol JM (2017) Sensor fusion methodology for vehicle detection. IEEE Intell Transp Syst Mag 9(1):123\u2013133","DOI":"10.1109\/MITS.2016.2620398"},{"key":"6528_CR31","doi-asserted-by":"crossref","unstructured":"Geiger A, Lenz P, Urtasun R (2012) Are we ready for autonomous driving? the kitti vision benchmark suite. In: 2012 IEEE conference on computer vision and pattern recognition, IEEE, pp 3354\u20133361","DOI":"10.1109\/CVPR.2012.6248074"},{"key":"6528_CR32","unstructured":"Staples G (2020) rosbag - ROS Wiki. http:\/\/wiki.ros.org\/rosbag"},{"issue":"10","key":"6528_CR33","doi-asserted-by":"publisher","first-page":"3337","DOI":"10.3390\/s18103337","volume":"18","author":"Y Yan","year":"2018","unstructured":"Yan Y, Mao Y, Li B (2018) Second: Sparsely embedded convolutional detection. Sens 18(10):3337","journal-title":"Sens"},{"key":"6528_CR34","unstructured":"Geiger A, Lenz P, Stiller C, Urtasun R (2023) 3D object detection evaluation 2017. https:\/\/www.cvlibs.net\/datasets\/kitti\/eval_object.php?obj_benchmark=3d"},{"key":"6528_CR35","unstructured":"Contributors M (2020) MMDetection3D: OpenMMLab next-generation platform for general 3D object detection. 
https:\/\/github.com\/open-mmlab\/mmdetection3d"},{"key":"6528_CR36","doi-asserted-by":"crossref","unstructured":"Padilla R, Netto SL, Da\u00a0Silva EA (2020) A survey on performance metrics for object-detection algorithms. In: 2020 international conference on systems, signals and image processing (IWSSIP), IEEE, pp 237\u2013242","DOI":"10.1109\/IWSSIP48289.2020.9145130"},{"issue":"3","key":"6528_CR37","doi-asserted-by":"publisher","first-page":"279","DOI":"10.3390\/electronics10030279","volume":"10","author":"R Padilla","year":"2021","unstructured":"Padilla R, Passos WL, Dias TL, Netto SL, Da Silva EA (2021) A comparative analysis of object detection metrics with a companion open-source toolkit. Electron 10(3):279","journal-title":"Electron"}],"container-title":["Applied Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-025-06528-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10489-025-06528-9\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-025-06528-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,19]],"date-time":"2025-09-19T13:57:39Z","timestamp":1758290259000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10489-025-06528-9"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,4,28]]},"references-count":37,"journal-issue":{"issue":"10","published-print":{"date-parts":[[2025,7]]}},"alternative-id":["6528"],"URL":"https:\/\/doi.org\/10.1007\/s10489-025-06528-9","relation":{},"ISSN":["0924-669X","1573-7497"],"issn-type":[{"type":"print","value":"0924-669X"},{"type":"electronic","value":"1573-7497"}],"subject":[],"published":{"date-parts":[[2025,4,28]]},"assertion":[{"value":"28 March 2025","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"28 April 2025","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"All authors are directly or indirectly related to R3CAV project, within which the proposed perception system has been developed and its design and implementation funded. Furthermore, Irene Cort\u00e9s, \u00c1ngel Madridano, Francisco Navas and Vicente Milan\u00e9s maintain an employment relation with Renault SA, owner of the vehicle and of the industrial complex where the presented system has been implemented and the described experiments conducted.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing Interests"}},{"value":"None of the aforementioned competing interests has affected the design and execution of this study. Therefore, the authors declare no conflict of interest.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of Interest"}},{"value":"No studies with human participants or animals are performed in this article. Hence, no ethics approval or informed consent is required.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical and Informed Consent for Data Used"}}],"article-number":"696"}}