{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,9]],"date-time":"2026-05-09T17:34:37Z","timestamp":1778348077045,"version":"3.51.4"},"reference-count":29,"publisher":"MDPI AG","issue":"2","license":[{"start":{"date-parts":[[2023,1,10]],"date-time":"2023-01-10T00:00:00Z","timestamp":1673308800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>This paper proposes a real-time, versatile Simultaneous Localization and Mapping (SLAM) and object localization system, which fuses measurements from LiDAR, camera, Inertial Measurement Unit (IMU), and Global Positioning System (GPS). Our system can locate itself in an unknown environment and build a scene map, based on which we can also track objects of interest and obtain their global locations. Specifically, our SLAM subsystem consists of the following four parts: LiDAR-inertial odometry, visual-inertial odometry, GPS-inertial odometry, and global pose graph optimization. The target-tracking and positioning subsystem is developed based on YOLOv4. 
Benefiting from the use of a GPS sensor in the SLAM system, we can obtain the global position of the target, which makes the system highly useful in military operations, rescue and disaster relief, and other scenarios.<\/jats:p>","DOI":"10.3390\/s23020801","type":"journal-article","created":{"date-parts":[[2023,1,11]],"date-time":"2023-01-11T04:59:58Z","timestamp":1673413198000},"page":"801","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":11,"title":["OL-SLAM: A Robust and Versatile System of Object Localization and SLAM"],"prefix":"10.3390","volume":"23","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-6137-8889","authenticated-orcid":false,"given":"Chao","family":"Chen","sequence":"first","affiliation":[{"name":"Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8135-9012","authenticated-orcid":false,"given":"Yukai","family":"Ma","sequence":"additional","affiliation":[{"name":"Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China"}]},{"given":"Jiajun","family":"Lv","sequence":"additional","affiliation":[{"name":"Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0129-1933","authenticated-orcid":false,"given":"Xiangrui","family":"Zhao","sequence":"additional","affiliation":[{"name":"Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China"}]},{"given":"Laijian","family":"Li","sequence":"additional","affiliation":[{"name":"Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China"}]},{"given":"Yong","family":"Liu","sequence":"additional","affiliation":[{"name":"Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China"}]},{"given":"Wang","family":"Gao","sequence":"additional","affiliation":[{"name":"Science and Technology on 
Complex System Control and Intelligent Agent Cooperation Laboratory, Beijing 100191, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,1,10]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1965","DOI":"10.1016\/j.ifacol.2017.08.162","article-title":"Visual inertial SLAM: Application to unmanned aerial vehicles","volume":"50","author":"Fink","year":"2017","journal-title":"IFAC-PapersOnLine"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Tang, J., Xu, D., Jia, K., and Zhang, L. (2021, January 20\u201325). Learning parallel dense correspondence from spatio-temporal descriptors for efficient and robust 4d reconstruction. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00596"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"431","DOI":"10.1260\/2047-4970.3.2.431","article-title":"4D reconstruction of tangible cultural heritage objects from web-retrieved images","volume":"3","author":"Kyriakaki","year":"2014","journal-title":"Int. J. Herit. Digit. Era"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Mustafa, A., Kim, H., Guillemaut, J., and Hilton, A. (2016, January 27\u201330). Temporally coherent 4d reconstruction of complex dynamic scenes. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.504"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Mourikis, A.I., and Roumeliotis, S.I. (2007, January 10\u201314). A Multi-State Constraint Kalman Filter for Vision-aided Inertial Navigation. 
Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Rome, Italy.","DOI":"10.1109\/ROBOT.2007.364024"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"1004","DOI":"10.1109\/TRO.2018.2853729","article-title":"Vins-mono: A robust and versatile monocular visual-inertial state estimator","volume":"34","author":"Qin","year":"2018","journal-title":"IEEE Trans. Robot."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"796","DOI":"10.1109\/LRA.2017.2653359","article-title":"Visual-inertial monocular SLAM with map reuse","volume":"2","year":"2017","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Liu, H., Chen, M., Zhang, G., Bao, H., and Bao, Y. (2018, January 18\u201323). Ice-ba: Incremental, consistent and efficient bundle adjustment for visual-inertial slam. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00211"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Shan, T., Englot, B., Meyers, D., Wang, W., Ratti, C., and Rus, D. (2020\u201324, January 24). Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping. Proceedings of the 2020 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA.","DOI":"10.1109\/IROS45743.2020.9341176"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"2053","DOI":"10.1109\/TRO.2022.3141876","article-title":"Fast-lio2: Fast direct lidar-inertial odometry","volume":"38","author":"Xu","year":"2022","journal-title":"IEEE Trans. Robot."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"5167","DOI":"10.1109\/LRA.2021.3070251","article-title":"Towards high-performance solid-state-lidar-inertial odometry and mapping","volume":"6","author":"Li","year":"2021","journal-title":"IEEE Robot. Autom. 
Lett."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Shan, T., Englot, B., Ratti, C., and Rus, D. (June, January 30). Lvi-sam: Tightly-coupled lidar-visual-inertial odometry via smoothing and mapping. Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi\u2019an, China.","DOI":"10.1109\/ICRA48506.2021.9561996"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"7469","DOI":"10.1109\/LRA.2021.3095515","article-title":"R2LIVE: A Robust, Real-Time, LiDAR-Inertial-Visual Tightly-Coupled State Estimator and Mapping","volume":"6","author":"Lin","year":"2021","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Zuo, X., Geneva, P., Lee, W., Liu, Y., and Huang, G. (2020\u201324, January 24). Lic-fusion: Lidar-inertial-camera odometry. Proceedings of the 2019 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA.","DOI":"10.1109\/IROS40897.2019.8967746"},{"key":"ref_15","unstructured":"Qin, T., Cao, S., Pan, J., and Shen, S. (2019). A general optimization-based framework for global pose estimation with multiple sensors. arXiv."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Ahmed, H., Ullah, I., Khan, U., Qureshi, M.B., Manzoor, S., Muhammad, N., Shahid Khan, M.U., and Nawaz, R. (2019). Adaptive filtering on GPS-aided MEMS-IMU for optimal estimation of ground vehicle trajectory. Sensors, 19.","DOI":"10.3390\/s19245357"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"67","DOI":"10.1016\/j.inffus.2020.10.018","article-title":"40 years of sensor fusion for orientation tracking via magnetic and inertial measurement units: Methods, lessons learned, and future challenges","volume":"68","author":"Nazarahari","year":"2021","journal-title":"Inf. Fusion"},{"key":"ref_18","unstructured":"Bochkovskiy, A., Wang, C.Y., and Liao, H.Y.M. (2020). Yolov4: Optimal speed and accuracy of object detection. 
arXiv."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"152","DOI":"10.1016\/j.eswa.2017.01.017","article-title":"Recognition of a landing platform for unmanned aerial vehicles by using computer vision-based techniques","volume":"76","author":"Pajares","year":"2017","journal-title":"Expert Syst. Appl."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Krishna, N.M., Reddy, R.Y., Reddy, M.S.C., Madhav, K.P., and Sudham, G. (2021, January 2\u20134). Object Detection and Tracking Using Yolo. Proceedings of the 2021 Third International Conference on Inventive Research in Computing Applications (ICIRCA), Coimbatore, India.","DOI":"10.1109\/ICIRCA51532.2021.9544598"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Shi, Y., Qayyum, S., Memon, S.A., Khan, U., Imtiaz, J., Ullah, I., Dancey, D., and Nawaz, R. (2020). A modified Bayesian framework for multi-sensor target tracking with out-of-sequence-measurements. Sensors, 20.","DOI":"10.3390\/s20143821"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"90","DOI":"10.1016\/j.isprsjprs.2019.10.015","article-title":"A Frustum-based probabilistic framework for 3D object detection by fusion of LiDAR and camera data","volume":"159","author":"Gong","year":"2020","journal-title":"ISPRS J. Photogramm. Remote Sens."},{"key":"ref_23","unstructured":"Shi, J. (1994, January 21\u201323). Good features to track. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA."},{"key":"ref_24","unstructured":"Lucas, B.D., and Kanade, T. (1981). An Iterative Image Registration Technique with an Application to Stereo Vision, Morgan Kaufmann Publishers Inc."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"012022","DOI":"10.1088\/1742-6596\/659\/1\/012022","article-title":"Performance evaluation of iterated extended Kalman filter with variable step-length","volume":"659","author":"Straka","year":"2015","journal-title":"J. Phys. Conf. 
Ser."},{"key":"ref_26","unstructured":"Ahmadi, M., Khayatian, A., and Karimaghaee, P. (2007, January 17\u201320). Orientation estimation by error-state extended Kalman filter in quaternion vector space. Proceedings of the SICE Annual Conference 2007, Takamatsu, Japan."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"216","DOI":"10.1177\/0278364911430419","article-title":"iSAM2: Incremental smoothing and mapping using the Bayes tree","volume":"31","author":"Kaess","year":"2012","journal-title":"Int. J. Robot. Res."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Lin, J., and Zhang, F. (2022, January 23\u201327). R 3 LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state Estimation and mapping package. Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA.","DOI":"10.1109\/ICRA46639.2022.9811935"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"7380","DOI":"10.1109\/TPAMI.2021.3119563","article-title":"Detection and Tracking Meet Drones Challenge","volume":"44","author":"Zhu","year":"2021","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/2\/801\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T18:05:54Z","timestamp":1760119554000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/2\/801"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,1,10]]},"references-count":29,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2023,1]]}},"alternative-id":["s23020801"],"URL":"https:\/\/doi.org\/10.3390\/s23020801","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,1,10]]}}}