{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,8,2]],"date-time":"2025-08-02T17:23:30Z","timestamp":1754155410238,"version":"3.41.2"},"reference-count":21,"publisher":"Emerald","issue":"6","license":[{"start":{"date-parts":[[2023,4,7]],"date-time":"2023-04-07T00:00:00Z","timestamp":1680825600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.emerald.com\/insight\/site-policies"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["IR"],"published-print":{"date-parts":[[2023,11,16]]},
"abstract":"<jats:sec>\n<jats:title content-type=\"abstract-subheading\">Purpose<\/jats:title>\n<jats:p>Simultaneous localization and mapping (SLAM), as a state estimation problem, is a prerequisite for autonomous vehicle motion in unknown environments. Existing algorithms are based on laser or visual odometry; however, lidar has a limited sensing range and yields relatively few data features, while cameras are vulnerable to external conditions, so localization and mapping cannot be performed stably and accurately with a single sensor. This paper aims to propose a laser three-dimensional (3D) tightly coupled mapping method that incorporates visual information, using laser point cloud information and image information to complement each other and improve the overall performance of the algorithm.<\/jats:p>\n<\/jats:sec>\n<jats:sec>\n<jats:title content-type=\"abstract-subheading\">Design\/methodology\/approach<\/jats:title>\n<jats:p>Visual feature points are first matched at the front end of the method, and mismatched point pairs are removed using the bidirectional random sample consensus (RANSAC) algorithm. The laser point cloud is then used to obtain depth information for these features, and the two types of feature points are fed into the pose estimation module, where a tightly coupled local bundle adjustment is solved using a heuristic simulated annealing algorithm. Finally, the visual bag-of-words model is fused with the laser point cloud information and a threshold is established to construct a loop closure framework that further reduces the cumulative drift error of the system over time.<\/jats:p>\n<\/jats:sec>\n<jats:sec>\n<jats:title content-type=\"abstract-subheading\">Findings<\/jats:title>\n<jats:p>Experiments on publicly available data sets show that the proposed method matches the real trajectory well. For various scenes, maps can be constructed with high accuracy and robustness by using the complementary laser and vision sensors. The method is also verified in a real environment using an autonomous walking acquisition platform; the system loaded with the method runs well over long periods and adapts to multiple scene types.<\/jats:p>\n<\/jats:sec>\n<jats:sec>\n<jats:title content-type=\"abstract-subheading\">Originality\/value<\/jats:title>\n<jats:p>A multi-sensor tightly coupled method is proposed to fuse laser and vision information for an optimal pose solution. A bidirectional RANSAC algorithm is used to remove mismatched visual point pairs. Further, oriented FAST and rotated BRIEF (ORB) feature points are used to build a bag-of-words model and construct a real-time loop closure framework that reduces error accumulation. According to the experimental validation results, the accuracy and robustness of the single-sensor SLAM algorithm can be improved.<\/jats:p>\n<\/jats:sec>",
"DOI":"10.1108\/ir-02-2023-0016","type":"journal-article","created":{"date-parts":[[2023,4,5]],"date-time":"2023-04-05T00:48:36Z","timestamp":1680655716000},"page":"917-929","source":"Crossref","is-referenced-by-count":1,"title":["Laser 3D tightly coupled mapping method based on visual information"],"prefix":"10.1108","volume":"50","author":[{"given":"Sixing","family":"Liu","sequence":"first","affiliation":[]},{"given":"Yan","family":"Chai","sequence":"additional","affiliation":[]},{"given":"Rui","family":"Yuan","sequence":"additional","affiliation":[]},{"given":"Hong","family":"Miao","sequence":"additional","affiliation":[]}],"member":"140","published-online":{"date-parts":[[2023,4,7]]},
"reference":[{"issue":"1","key":"key2023111504263236800_ref001","first-page":"1","article-title":"Consistent map building in petrochemical complexes for firefighter robots using SLAM based on GPS and Lidar","volume":"5","year":"2018","journal-title":"Robomech Journal"},
{"issue":"3","key":"key2023111504263236800_ref002","doi-asserted-by":"crossref","first-page":"108","DOI":"10.1109\/MRA.2006.1678144","article-title":"Simultaneous localization and mapping (SLAM): part II","volume":"13","year":"2006","journal-title":"IEEE Robotics & Automation Magazine"},
{"issue":"2","key":"key2023111504263236800_ref003","doi-asserted-by":"crossref","first-page":"217","DOI":"10.1049\/ip-cta:20000138","article-title":"Adaptive iterative learning control of uncertain robotic systems","volume":"147","year":"2000","journal-title":"IEE Proceedings-Control Theory and Applications"},
{"issue":"6","key":"key2023111504263236800_ref004","doi-asserted-by":"crossref","first-page":"1052","DOI":"10.1109\/TPAMI.2007.1049","article-title":"MonoSLAM: real-time single camera SLAM","volume":"29","year":"2007","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},
{"issue":"6","key":"key2023111504263236800_ref005","doi-asserted-by":"crossref","first-page":"381","DOI":"10.1145\/358669.358692","article-title":"Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography","volume":"24","year":"1981","journal-title":"Communications of the ACM"},
{"issue":"5","key":"key2023111504263236800_ref006","doi-asserted-by":"crossref","first-page":"1188","DOI":"10.1109\/TRO.2012.2197158","article-title":"Bags of binary words for fast place recognition in image sequences","volume":"28","year":"2012","journal-title":"IEEE Transactions on Robotics"},
{"volume-title":"The Fourteen Lectures on Visual SLAM","year":"2017","key":"key2023111504263236800_ref007"},
{"issue":"11","key":"key2023111504263236800_ref008","doi-asserted-by":"crossref","first-page":"1231","DOI":"10.1177\/0278364913491297","article-title":"Vision meets robotics: the KITTI dataset","volume":"32","year":"2013","journal-title":"The International Journal of Robotics Research"},
{"issue":"1","key":"key2023111504263236800_ref009","doi-asserted-by":"crossref","first-page":"34","DOI":"10.1109\/TRO.2006.889486","article-title":"Improved techniques for grid mapping with Rao-Blackwellized particle filters","volume":"23","year":"2007","journal-title":"IEEE Transactions on Robotics"},
{"key":"key2023111504263236800_ref010","first-page":"344","article-title":"Development of visual simultaneous localization and mapping (VSLAM) for a pipe inspection robot","volume-title":"International Symposium on Computational Intelligence in Robotics and Automation","year":"2007"},
{"issue":"5","key":"key2023111504263236800_ref011","doi-asserted-by":"crossref","first-page":"1147","DOI":"10.1109\/TRO.2015.2463671","article-title":"ORB-SLAM: a versatile and accurate monocular SLAM system","volume":"31","year":"2015","journal-title":"IEEE Transactions on Robotics"},
{"issue":"5","key":"key2023111504263236800_ref012","first-page":"1255","article-title":"ORB-SLAM2: an open-source SLAM system for monocular, stereo and RGB-D cameras","volume":"33","year":"2016","journal-title":"IEEE Transactions on Robotics"},
{"key":"key2023111504263236800_ref013","first-page":"10038","article-title":"Deep depth estimation from Visual-Inertial SLAM","volume-title":"IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS)","year":"2020"},
{"article-title":"LeGO-LOAM: lightweight and ground-optimized Lidar odometry and mapping on variable terrain","volume-title":"IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS)","year":"2018","key":"key2023111504263236800_ref014"},
{"key":"key2023111504263236800_ref015","first-page":"1","article-title":"Direct visual SLAM using sparse depth for camera-Lidar system","volume-title":"International Conference on Robotics and Automation (ICRA)","year":"2018"},
{"article-title":"Estimating uncertain spatial relationships in robotics","volume-title":"Autonomous Robot Vehicles","year":"1990","key":"key2023111504263236800_ref016"},
{"issue":"5SI","key":"key2023111504263236800_ref017","first-page":"787","article-title":"Pose interpolation for laser-based visual odometry","volume":"31","year":"2014","journal-title":"Journal of Field Robotics"},
{"issue":"4\/5","key":"key2023111504263236800_ref018","first-page":"598","article-title":"Real-time large-scale dense RGB-D SLAM with volumetric fusion","volume":"34","year":"2015","journal-title":"The International Journal of Robotics Research"},
{"issue":"4","key":"key2023111504263236800_ref019","first-page":"72","article-title":"Vehicle simultaneous localization and mapping algorithm with Lidar-camera fusion","volume":"21","year":"2021","journal-title":"Journal of Transportation Systems Engineering and Information Technology"},
{"issue":"5","key":"key2023111504263236800_ref020","first-page":"9","article-title":"LOAM: Lidar odometry and mapping in real-time","volume":"15","year":"2014","journal-title":"Robotics: Science and Systems"},
{"article-title":"Visual-Lidar odometry and mapping: low-drift, robust, and fast","volume-title":"IEEE International Conference on Robotics and Automation (ICRA)","year":"2015","key":"key2023111504263236800_ref021"}],
"container-title":["Industrial Robot: the international journal of robotics research and application"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.emerald.com\/insight\/content\/doi\/10.1108\/IR-02-2023-0016\/full\/xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.emerald.com\/insight\/content\/doi\/10.1108\/IR-02-2023-0016\/full\/html","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,7,24]],"date-time":"2025-07-24T21:38:31Z","timestamp":1753393111000},"score":1,"resource":{"primary":{"URL":"http:\/\/www.emerald.com\/ir\/article\/50\/6\/917-929\/181801"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,4,7]]},"references-count":21,"journal-issue":{"issue":"6","published-online":{"date-parts":[[2023,4,7]]},"published-print":{"date-parts":[[2023,11,16]]}},"alternative-id":["10.1108\/IR-02-2023-0016"],"URL":"https:\/\/doi.org\/10.1108\/ir-02-2023-0016","relation":{},"ISSN":["0143-991X","0143-991X"],"issn-type":[{"type":"print","value":"0143-991X"},{"type":"electronic","value":"0143-991X"}],"subject":[],"published":{"date-parts":[[2023,4,7]]}}}