{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,30]],"date-time":"2026-04-30T02:01:55Z","timestamp":1777514515745,"version":"3.51.4"},"reference-count":56,"publisher":"MDPI AG","issue":"6","license":[{"start":{"date-parts":[[2022,3,13]],"date-time":"2022-03-13T00:00:00Z","timestamp":1647129600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"the National Key R&amp;D Program of China","award":["2019YFC1711200"],"award-info":[{"award-number":["2019YFC1711200"]}]},{"name":"the Key R&amp;D project of Shandong Province of China","award":["2019JZZY010732"],"award-info":[{"award-number":["2019JZZY010732"]}]},{"name":"the Key R&amp;D project of Shandong Province of China","award":["2020CXGC011001"],"award-info":[{"award-number":["2020CXGC011001"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>In recent years, multi-sensor fusion technology has made enormous progress in 3D reconstruction, surveying and mapping, autonomous driving, and other related fields, and extrinsic calibration is a necessary condition for multi-sensor fusion applications. This paper proposes a 3D LIDAR-to-camera automatic calibration framework based on graph optimization. The system can automatically identify the position of the pattern and build a set of virtual feature point clouds, and can simultaneously complete the calibration of the LIDAR and multiple cameras. To test this framework, a multi-sensor system is formed using a mobile robot equipped with LIDAR, monocular and binocular cameras, and the pairwise calibration of LIDAR with two cameras is evaluated quantitatively and qualitatively. The results show that this method can produce more accurate calibration results than the state-of-the-art method. The average error on the camera normalization plane is 0.161 mm, which outperforms existing calibration methods. Due to the introduction of graph optimization, the original point cloud is also optimized while optimizing the external parameters between the sensors, which can effectively correct the errors caused during data collection, so it is also robust to bad data.<\/jats:p>",
"DOI":"10.3390\/s22062221","type":"journal-article","created":{"date-parts":[[2022,3,13]],"date-time":"2022-03-13T21:44:17Z","timestamp":1647207857000},"page":"2221","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":19,"title":["Automatic Extrinsic Calibration of 3D LIDAR and Multi-Cameras Based on Graph Optimization"],"prefix":"10.3390","volume":"22","author":[{"given":"Jinshun","family":"Ou","sequence":"first","affiliation":[{"name":"School of Mechanical Engineering, Shandong University, Jinan 250061, China"},{"name":"Key Laboratory of High Efficiency and Clean Mechanical Manufacture, Ministry of Education, Jinan 250061, China"}]},{"given":"Panling","family":"Huang","sequence":"additional","affiliation":[{"name":"School of Mechanical Engineering, Shandong University, Jinan 250061, China"},{"name":"Key Laboratory of High Efficiency and Clean Mechanical Manufacture, Ministry of Education, Jinan 250061, China"}]},{"given":"Jun","family":"Zhou","sequence":"additional","affiliation":[{"name":"School of Mechanical Engineering, Shandong University, Jinan 250061, China"},{"name":"Key Laboratory of High Efficiency and Clean Mechanical Manufacture, Ministry of Education, Jinan 250061, China"}]},{"given":"Yifan","family":"Zhao","sequence":"additional","affiliation":[{"name":"School of Mechanical Engineering, Shandong University, Jinan 250061, China"},{"name":"Key Laboratory of High Efficiency and Clean Mechanical Manufacture, Ministry of Education, Jinan 250061, China"}]},
{"given":"Lebin","family":"Lin","sequence":"additional","affiliation":[{"name":"School of Mechanical Engineering, Shandong University, Jinan 250061, China"},{"name":"Key Laboratory of High Efficiency and Clean Mechanical Manufacture, Ministry of Education, Jinan 250061, China"}]}],"member":"1968","published-online":{"date-parts":[[2022,3,13]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"237","DOI":"10.1109\/34.982903","article-title":"Vision for mobile robot navigation: A survey","volume":"24","author":"DeSouza","year":"2002","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"159","DOI":"10.1016\/j.optlaseng.2013.08.005","article-title":"Optical 3D laser measurement system for navigation of autonomous mobile robot","volume":"54","author":"Sergiyenko","year":"2014","journal-title":"Opt. Lasers Eng."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"De Silva, V., Roche, J., and Kondoz, A. (2018). Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots. Sensors, 18.","DOI":"10.3390\/s18082730"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Manish, R., Hasheminasab, S.M., Liu, J., Koshan, Y., Mahlberg, J.A., Lin, Y.-C., Ravi, R., Zhou, T., McGuffey, J., and Wells, T. (2022). Image-Aided LiDAR Mapping Platform and Data Processing Strategy for Stockpile Volume Estimation. Remote Sens., 14.","DOI":"10.3390\/rs14010231"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"507","DOI":"10.1002\/esp.4513","article-title":"Mapping river bathymetries: Evaluating topobathymetric LiDAR survey","volume":"44","author":"Tonina","year":"2019","journal-title":"Earth Surf. Processes Landf."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Tuell, G., Barbor, K., and Wozencraft, J. (2010). Overview of the Coastal Zone Mapping and Imaging Lidar (CZMIL): A New Multisensor Airborne Mapping System for the U.S. Army Corps of Engineers, SPIE.","DOI":"10.1117\/12.851905"},
{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Bao, Y., Lin, P., Li, Y., Qi, Y., Wang, Z., Du, W., and Fan, Q. (2021). Parallel Structure from Motion for Sparse Point Cloud Generation in Large-Scale Scenes. Sensors, 21.","DOI":"10.3390\/s21113939"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Jeong, J., Yoon, T.S., and Park, J.B. (2018). Towards a Meaningful 3D Map Using a 3D Lidar and a Camera. Sensors, 18.","DOI":"10.3390\/s18082571"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"273","DOI":"10.1080\/19479832.2013.811124","article-title":"3D building modeling using images and LiDAR: A review","volume":"4","author":"Wang","year":"2013","journal-title":"Int. J. Image Data Fusion"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Vlaminck, M., Luong, H., Goeman, W., and Philips, W. (2016). 3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach. Sensors, 16.","DOI":"10.3390\/s16111923"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"493","DOI":"10.1016\/j.optlaseng.2012.10.010","article-title":"Integration of LiDAR data and optical multi-view images for 3D reconstruction of building roofs","volume":"51","author":"Cheng","year":"2013","journal-title":"Opt. Lasers Eng."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Moreno, H., Valero, C., Bengochea-Guevara, J.M., Ribeiro, A., Garrido-Izard, M., and Andujar, D. (2020). On-Ground Vineyard Reconstruction Using a LiDAR-Based Automated System. Sensors, 20.","DOI":"10.3390\/s20041102"},{"key":"ref_13","first-page":"50","article-title":"Lidar for Autonomous Driving: The Principles, Challenges, and Trends for Automotive Lidar and Perception Systems","volume":"37","author":"Li","year":"2020","journal-title":"IEEE Signal Process. Mag."},
{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Levinson, J., Askeland, J., Becker, J., Dolson, J., Held, D., Kammel, S., Kolter, J.Z., Langer, D., Pink, O., and Pratt, V. (2011, January 5\u20139). Towards fully autonomous driving: Systems and algorithms. Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany.","DOI":"10.1109\/IVS.2011.5940562"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Dehzangi, O., Taherisadr, M., and ChangalVala, R. (2017). IMU-Based Gait Recognition Using Convolutional Neural Networks and Multi-Sensor Fusion. Sensors, 17.","DOI":"10.3390\/s17122735"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Liang, M., Yang, B., Chen, Y., Hu, R., and Urtasun, R. (2019, January 15\u201320). Multi-Task Multi-Sensor Fusion for 3D Object Detection. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00752"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Wan, G., Yang, X., Cai, R., Li, H., Zhou, Y., Wang, H., and Song, S. (2018, January 21\u201325). Robust and Precise Vehicle Localization Based on Multi-Sensor Fusion in Diverse City Scenes. Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia.","DOI":"10.1109\/ICRA.2018.8461224"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Dimitrievski, M., Veelaert, P., and Philips, W. (2019). Behavioral Pedestrian Tracking Using a Camera and LiDAR Sensors on a Moving Vehicle. Sensors, 19.","DOI":"10.3390\/s19020391"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"3585","DOI":"10.1109\/LRA.2019.2928261","article-title":"A Joint Optimization Approach of LiDAR-Camera Fusion for Accurate Dense 3-D Reconstructions","volume":"4","author":"Zhen","year":"2019","journal-title":"IEEE Robot. Autom. Lett."},
{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Debeunne, C., and Vivet, D. (2020). A Review of Visual-LiDAR Fusion based Simultaneous Localization and Mapping. Sensors, 20.","DOI":"10.3390\/s20072068"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Shaukat, A., Blacker, P.C., Spiteri, C., and Gao, Y. (2016). Towards Camera-LIDAR Fusion-Based Terrain Modelling for Planetary Surfaces: Review and Analysis. Sensors, 16.","DOI":"10.3390\/s16111952"},{"key":"ref_22","first-page":"1","article-title":"Computer Vision for Autonomous Vehicles: Problems, Datasets and State-of-the-Art","volume":"12","author":"Janai","year":"2017","journal-title":"Found. Trends\u00ae Comput. Graph. Vis."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Banerjee, K., Notz, D., Windelen, J., Gavarraju, S., and He, M. (2018, January 26\u201330). Online Camera LiDAR Fusion and Object Detection on Hybrid Data for Autonomous Driving. Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China.","DOI":"10.1109\/IVS.2018.8500699"},{"key":"ref_24","unstructured":"Vivet, D., Debord, A., and Pag\u00e8s, G. (2019, January 29\u201330). PAVO: A Parallax based Bi-Monocular VO Approach for Autonomous Navigation in Various Environments. Proceedings of the DISP Conference, St Hugh College, Oxford, UK."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"1072","DOI":"10.1109\/TITS.2015.2493160","article-title":"Sensor Fusion Algorithm Design in Detecting Vehicles Using Laser Scanner and Stereo Vision","volume":"17","author":"Kim","year":"2016","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"2364","DOI":"10.1109\/TITS.2016.2639582","article-title":"Traffic Sign Occlusion Detection Using Mobile Laser Scanning Point Clouds","volume":"18","author":"Huang","year":"2017","journal-title":"IEEE Trans. Intell. Transp. Syst."},
{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Costa, F.A.L., Mitishita, E.A., and Martins, M. (2018). The Influence of Sub-Block Position on Performing Integrated Sensor Orientation Using In Situ Camera Calibration and Lidar Control Points. Remote Sens., 10.","DOI":"10.3390\/rs10020260"},{"key":"ref_28","unstructured":"Heikkila, J., and Silv\u00e9n, O. (1997, January 17\u201319). A four-step camera calibration procedure with implicit image correction. Proceedings of the IEEE Computer Society Conference on Computer Vision & Pattern Recognition, San Juan, PR, USA."},{"key":"ref_29","unstructured":"Bouguet, J.Y. (2022, January 05). Camera Calibration Toolbox for Matlab, 2013. Available online: www.vision.caltech.edu\/bouguetj."},{"key":"ref_30","unstructured":"Qilong, Z., and Pless, R. (October, January 28). Extrinsic calibration of a camera and laser range finder (improves camera calibration). Proceedings of the 2004 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566), Sendai, Japan."},{"key":"ref_31","unstructured":"Unnikrishnan, R., and Hebert, M. (2005). Fast Extrinsic Calibration of a Laser Rangefinder to a Camera, Robotics Institute. Tech. Rep. CMU-RI-TR-05-09."},{"key":"ref_32","unstructured":"Kassir, A., and Peynot, T. (2010). Reliable automatic camera-laser calibration. Proceedings of the 2010 Australasian Conference on Robotics & Automation, Australian Robotics and Automation Association (ARAA)."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Geiger, A., Moosmann, F., Car, O., and Schuster, B. (2012, January 14\u201318). Automatic Camera and Range Sensor Calibration Using a Single Shot. Proceedings of the IEEE International Conference on Robotics & Automation, Saint Paul, MN, USA.","DOI":"10.1109\/ICRA.2012.6224570"},
{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Kim, E.S., and Park, S.Y. (2019). Extrinsic Calibration between Camera and LiDAR Sensors by Matching Multiple 3D Planes. Sensors, 20.","DOI":"10.3390\/s20010052"},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"775","DOI":"10.1002\/rob.21540","article-title":"Leveraging Image-based Localization for Infrastructure-based Calibration of a Multi-camera Rig","volume":"32","author":"Heng","year":"2015","journal-title":"J. Field Robot."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Bai, Z., Jiang, G., and Xu, A. (2020). LiDAR-Camera Calibration Using Line Correspondences. Sensors, 20.","DOI":"10.3390\/s20216319"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Schneider, N., Piewak, F., Stiller, C., and Franke, U. (2017). RegNet: Multimodal Sensor Registration Using Deep Neural Networks. IEEE Int. Veh. Sym., 1803\u20131810.","DOI":"10.1109\/IVS.2017.7995968"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Iyer, G., Ram, R.K., Murthy, J.K., and Krishna, K.M. (2019, January 3\u20138). CalibNet: Geometrically Supervised Extrinsic Calibration using 3D Spatial Transformer Networks. Proceedings of the 2018 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China.","DOI":"10.1109\/IROS.2018.8593693"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Lv, X., Wang, S., and Ye, D. (2021). CFNet: LiDAR-Camera Registration Using Calibration Flow Network. Sensors, 21.","DOI":"10.3390\/s21238112"},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"114","DOI":"10.1049\/iet-opt.2017.0067","article-title":"Research on the ranging statistical distribution of laser radar with a constant fraction discriminator","volume":"12","author":"Lai","year":"2018","journal-title":"IET Optoelectron."},{"key":"ref_41","unstructured":"Dhall, A., Chelani, K., Radhakrishnan, V., and Krishna, K.M. (2017). LiDAR-Camera Calibration using 3D-3D Point correspondences. arXiv."},
{"key":"ref_42","doi-asserted-by":"crossref","first-page":"1330","DOI":"10.1109\/34.888718","article-title":"A flexible new technique for camera calibration","volume":"22","author":"Zhang","year":"2000","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"381","DOI":"10.1145\/358669.358692","article-title":"Random Sample Consensus\u2014A Paradigm for Model-Fitting with Applications to Image-Analysis and Automated Cartography","volume":"24","author":"Fischler","year":"1981","journal-title":"Commun. ACM"},{"key":"ref_44","unstructured":"Lucchese, L., and Mitra, S.K. (July, January 29). Using saddle points for subpixel feature detection in camera calibration targets. Proceedings of the Conference on Circuits & Systems, Chengdu, China."},{"key":"ref_45","first-page":"120","article-title":"The OpenCV library","volume":"25","author":"Bradski","year":"2000","journal-title":"Dr. Dobb J. Softw. Tools Prof. Program."},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"907","DOI":"10.1016\/j.imavis.2006.07.005","article-title":"Lie algebra approach for tracking and 3D motion estimation using monocular vision","volume":"25","year":"2007","journal-title":"Image Vis. Comput."},{"key":"ref_47","unstructured":"Loukides, M. (2008). Learning OpenCV: Computer vision with the OpenCV library, O\u2019Reilly Media, Inc.. [1st ed.]."},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Carlone, L. (2013, January 6\u201310). A convergence analysis for pose graph optimization via Gauss-Newton methods. Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany.","DOI":"10.1109\/ICRA.2013.6630690"},{"key":"ref_49","first-page":"101","article-title":"The Levenberg-Marquardt Algorithm","volume":"11","author":"Ranganathan","year":"2004","journal-title":"Tutoral LM Algorithm"},
{"key":"ref_50","unstructured":"Xiang, G., Tao, Z., and Yi, L. (2017). Fourteen Lectures on Visual SLAM: From Theory to Practice, Publishing House of Electronics Industry."},{"key":"ref_51","unstructured":"Kummerle, R., Grisetti, G., Strasdat, H., Konolige, K., and Burgard, W. (2011, January 9\u201313). G2O: A general framework for graph optimization. Proceedings of the IEEE International Conference on Robotics & Automation, Shanghai, China."},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Triggs, B., McLauchlan, P.F., Hartley, R.I., and Fitzgibbon, A.W. (1999, January 20). Bundle Adjustment\u2014A Modern Synthesis. Proceedings of the Vision Algorithms: Theory and Practice, Corfu, Greece.","DOI":"10.1007\/3-540-44480-7_21"},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"2","DOI":"10.1145\/1486525.1486527","article-title":"SBA: A Software Package for Generic Sparse Bundle Adjustment","volume":"36","author":"Lourakis","year":"2009","journal-title":"ACM Trans. Math. Softw. (TOMS)"},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"155","DOI":"10.1007\/s11263-008-0152-6","article-title":"EPnP: An Accurate O(n) Solution to the PnP Problem","volume":"81","author":"Lepetit","year":"2009","journal-title":"Int. J. Comput. Vis."},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"481","DOI":"10.1016\/j.patcog.2015.09.023","article-title":"Generation of fiducial marker dictionaries using Mixed Integer Linear Programming","volume":"51","year":"2016","journal-title":"Pattern Recognit."},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"38","DOI":"10.1016\/j.imavis.2018.05.004","article-title":"Speeded up detection of squared fiducial markers","volume":"76","year":"2018","journal-title":"Image Vis. Comput."}],
"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/6\/2221\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T22:35:50Z","timestamp":1760135750000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/6\/2221"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,3,13]]},"references-count":56,"journal-issue":{"issue":"6","published-online":{"date-parts":[[2022,3]]}},"alternative-id":["s22062221"],"URL":"https:\/\/doi.org\/10.3390\/s22062221","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,3,13]]}}}