{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,5]],"date-time":"2026-03-05T04:54:36Z","timestamp":1772686476158,"version":"3.50.1"},"reference-count":35,"publisher":"MDPI AG","issue":"18","license":[{"start":{"date-parts":[[2023,9,20]],"date-time":"2023-09-20T00:00:00Z","timestamp":1695168000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["52006137"],"award-info":[{"award-number":["52006137"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["52376160"],"award-info":[{"award-number":["52376160"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["19YF1423400"],"award-info":[{"award-number":["19YF1423400"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Shanghai Sailing Program","award":["52006137"],"award-info":[{"award-number":["52006137"]}]},{"name":"Shanghai Sailing Program","award":["52376160"],"award-info":[{"award-number":["52376160"]}]},{"name":"Shanghai Sailing Program","award":["19YF1423400"],"award-info":[{"award-number":["19YF1423400"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>This work reports an information-based landmarks assisted simultaneous localization and mapping (InfoLa-SLAM) in large-scale scenes using single-line lidar. The solution employed two novel designs. 
The first design was a keyframe selection method based on Fisher information, which reduced the computational cost of the nonlinear optimization in the SLAM back-end by selecting a relatively small number of keyframes while preserving mapping accuracy. The Fisher information was acquired from the point cloud registration between the current frame and the previous keyframe. The second design was an efficient global descriptor for place recognition, achieved by designing a unique graphical feature ID to effectively match the local map with the global one. The results showed that, compared with traditional keyframe selection strategies (e.g., based on time, angle, or distance), the proposed method reduced the number of keyframes by 35.16% in a warehouse with an area of about 10,000 m<jats:sup>2<\/jats:sup>. The relocalization module achieved a high success rate (96%) even under a high level of measurement noise (0.05 m), while the time consumed for relocalization remained below 28 ms. The proposed InfoLa-SLAM was also compared with Cartographer on the same dataset. 
The results showed that InfoLa-SLAM achieved very similar mapping accuracy to Cartographer but excelled in lightweight performance, achieving a 9.11% reduction in the CPU load and a significant 56.67% decrease in the memory consumption.<\/jats:p>","DOI":"10.3390\/rs15184627","type":"journal-article","created":{"date-parts":[[2023,9,20]],"date-time":"2023-09-20T21:47:03Z","timestamp":1695246423000},"page":"4627","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["InfoLa-SLAM: Efficient Lidar-Based Lightweight Simultaneous Localization and Mapping with Information-Based Keyframe Selection and Landmarks Assisted Relocalization"],"prefix":"10.3390","volume":"15","author":[{"ORCID":"https:\/\/orcid.org\/0009-0003-6341-4577","authenticated-orcid":false,"given":"Yuan","family":"Lin","sequence":"first","affiliation":[{"name":"China-UK Low Carbon College, Shanghai Jiao Tong University, Shanghai 201100, China"}]},{"given":"Haiqing","family":"Dong","sequence":"additional","affiliation":[{"name":"Xingji Meizu Group, Wuhan 430056, China"}]},{"given":"Wentao","family":"Ye","sequence":"additional","affiliation":[{"name":"China-UK Low Carbon College, Shanghai Jiao Tong University, Shanghai 201100, China"}]},{"given":"Xue","family":"Dong","sequence":"additional","affiliation":[{"name":"China-UK Low Carbon College, Shanghai Jiao Tong University, Shanghai 201100, China"}]},{"given":"Shuogui","family":"Xu","sequence":"additional","affiliation":[{"name":"Changhai Hospital Affiliated to the Second Military Medical University, Road No. 
168, Yangpu District, Shanghai 200433, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,9,20]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1309","DOI":"10.1109\/TRO.2016.2624754","article-title":"Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age","volume":"32","author":"Cadena","year":"2016","journal-title":"IEEE Trans. Robot."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Hess, W., Kohler, D., Rapp, H., and Andor, D. (2016, January 16\u201321). Real-time loop closure in 2D LIDAR SLAM. Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden.","DOI":"10.1109\/ICRA.2016.7487258"},{"key":"ref_3","first-page":"18","article-title":"Edge-SLAM: Edge-Assisted Visual Simultaneous Localization and Mapping","volume":"22","author":"Ali","year":"2022","journal-title":"ACM Trans. Embed. Comput. Syst."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Kuo, J., Muglikar, M., Zhang, Z., and Scaramuzza, D. (August, January 31). Redesigning SLAM for Arbitrary Multi-Camera Systems. Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France.","DOI":"10.1109\/ICRA40945.2020.9197553"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"3063","DOI":"10.3390\/s22083063","article-title":"A Tightly Coupled LiDAR-Inertial SLAM for Perceptually Degraded Scenes","volume":"22","author":"Lin","year":"2022","journal-title":"Sensors"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Shan, T., Englot, B., Meyers, D., Wang, W., Ratti, C., and Rus, D. (2020, January 25\u201329). Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping. 
Proceedings of the 2020 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA.","DOI":"10.1109\/IROS45743.2020.9341176"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"249","DOI":"10.1109\/TRO.2016.2623335","article-title":"SVO: Semidirect Visual Odometry for Monocular and Multicamera Systems","volume":"33","author":"Forster","year":"2017","journal-title":"IEEE Trans. Robot."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"611","DOI":"10.1109\/TPAMI.2017.2658577","article-title":"Direct Sparse Odometry","volume":"40","author":"Engel","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"1147","DOI":"10.1109\/TRO.2015.2463671","article-title":"ORB-SLAM: A Versatile and Accurate Monocular SLAM System","volume":"31","author":"Montiel","year":"2015","journal-title":"IEEE Trans. Robot."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"1004","DOI":"10.1109\/TRO.2018.2853729","article-title":"Vins-mono: A robust and versatile monocular visual-inertial state estimator","volume":"34","author":"Qin","year":"2018","journal-title":"IEEE Trans. Robot."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Harmat, A., Sharf, I., and Trentini, M. (2012). Parallel Tracking and Mapping with Multiple Cameras on an Unmanned Aerial Vehicle, Springer.","DOI":"10.1007\/978-3-642-33509-9_42"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Xie, P., Su, W., Li, B., Jian, R., Huang, R., Zhang, S., and Wei, J. (2020, January 6\u20138). Modified Keyframe Selection Algorithm and Map Visualization Based on ORB-SLAM2. Proceedings of the 2020 4th International Conference on Robotics and Automation Sciences (ICRAS), Chengdu, China.","DOI":"10.1109\/ICRAS49812.2020.9135058"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Das, A., and Waslander, S.L. (October, January 28). 
Entropy based keyframe selection for Multi-Camera Visual SLAM. Proceedings of the 2015 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany.","DOI":"10.1109\/IROS.2015.7353891"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"351","DOI":"10.1109\/TRO.2021.3078287","article-title":"Robust Odometry and Mapping for Multi-LiDAR Systems with Online Extrinsic Calibration","volume":"38","author":"Jiao","year":"2020","journal-title":"IEEE Trans. Robot."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Zhang, J., and Singh, S. (2014, January 12\u201316). LOAM: Lidar odometry and mapping in real-time. Proceedings of the Robotics: Science and Systems, Berkeley, CA, USA.","DOI":"10.15607\/RSS.2014.X.007"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"2827","DOI":"10.1109\/TMM.2019.2913324","article-title":"Real-Time Visual\u2013Inertial SLAM Based on Adaptive Keyframe Selection for Mobile AR Applications","volume":"21","author":"Piao","year":"2019","journal-title":"IEEE Trans. Multimed."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Tang, X., Fu, W., Jiang, M., Peng, G., Wu, Z., Yue, Y., and Wang, D. (2019, January 18\u201320). Place recognition using line-junction-lines in urban environments. Proceedings of the 2019 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM), Bangkok, Thailand.","DOI":"10.1109\/CIS-RAM47153.2019.9095776"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Arandjelovic, R., Gronat, P., Torii, A., Pajdla, T., and Sivic, J. (2016, January 27\u201330). NetVLAD: CNN architecture for weakly supervised place recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.572"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Peng, G., Zhang, J., Li, H., and Wang, D. 
(2021, January 11\u201317). Attentional pyramid pooling of salient visual residuals for place recognition. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, BC, Canada.","DOI":"10.1109\/ICCV48922.2021.00092"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Peng, G., Huang, Y., Li, H., Wu, Z., and Wang, D. (2022, January 23\u201327). LSDNet: A Lightweight Self-Attentional Distillation Network for Visual Place Recognition. Proceedings of the 2022 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan.","DOI":"10.1109\/IROS47612.2022.9982272"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Calonder, M., Lepetit, V., Strecha, C., and Fua, P. (2010, January 5\u201311). BRIEF: Binary Robust Independent Elementary Features. Proceedings of the European Conference on Computer Vision, Crete, Greece.","DOI":"10.1007\/978-3-642-15561-1_56"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Jin, S., Wu, Z., Zhao, C., Zhang, J., Peng, G., and Wang, D. (2022, January 23\u201327). SectionKey: 3-D Semantic Point Cloud Descriptor for Place Recognition. Proceedings of the 2022 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan.","DOI":"10.1109\/IROS47612.2022.9981605"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Fan, Y., He, Y., and Tan, U.X. (2020, January 25\u201329). Seed: A Segmentation-Based Egocentric 3D Point Cloud Descriptor for Loop Closure Detection. Proceedings of the 2020 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA.","DOI":"10.1109\/IROS45743.2020.9341517"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Wang, H., Wang, C., and Xie, L. (August, January 31). Intensity Scan Context: Coding Intensity and Geometry Relations for Loop Closure Detection. 
Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France.","DOI":"10.1109\/ICRA40945.2020.9196764"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Wang, Y., Sun, Z., Xu, C.-Z., Sarma, S.E., Yang, J., and Kong, H. (2020, January 25\u201329). Lidar iris for loop-closure detection. Proceedings of the 2020 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA.","DOI":"10.1109\/IROS45743.2020.9341010"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"He, L., Wang, X., and Zhang, H. (2016, January 9\u201314). M2DP: A novel 3D point cloud descriptor and its application in loop closure detection. Proceedings of the 2016 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea.","DOI":"10.1109\/IROS.2016.7759060"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Kim, G., and Kim, A. (2018, January 1\u20135). Scan Context: Egocentric Spatial Descriptor for Place Recognition within 3D Point Cloud Map. Proceedings of the 2018 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain.","DOI":"10.1109\/IROS.2018.8593953"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Chen, X., L\u00e4be, T., Milioto, A., R\u00f6hling, T., Vysotska, O., Haag, A., Behley, J., and Stachniss, C. (2021, January 12\u201316). OverlapNet: Loop Closing for LiDAR-based SLAM. Proceedings of the Robotics: Science and Systems XVI, Virtual Event.","DOI":"10.15607\/RSS.2020.XVI.009"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Scovanner, P., Ali, S., and Shah, M. (2007, January 25\u201329). A 3-dimensional sift descriptor and its application to action recognition. Proceedings of the 15th ACM International Conference on Multimedia, Augsburg, Germany.","DOI":"10.1145\/1291233.1291311"},{"key":"ref_30","unstructured":"Sipiran, I., and Bustos, B. (2010, January 2). 
A Robust 3D Interest Points Detector Based on Harris Operator. Proceedings of the Eurographics Workshop on 3D Object Retrieval, Norrk\u00f6ping, Sweden."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Tsourounis, D., Kastaniotis, D., Theoharatos, C., Kazantzidis, A., and Economou, G. (2022). SIFT-CNN: When Convolutional Neural Networks Meet Dense SIFT Descriptors for Image and Sequence Classification. J. Imaging, 8.","DOI":"10.3390\/jimaging8100256"},{"key":"ref_32","unstructured":"Qi, C.R., Su, H., Mo, K., and Guibas, L.J. (2017, January 21\u201326). PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Censi, A. (2007, January 10\u201314). On achievable accuracy for range-finder localization. Proceedings of the Robotics and Automation, Roma, Italy.","DOI":"10.1109\/ROBOT.2007.364120"},{"key":"ref_34","unstructured":"Casella, G., and Berger, R.L. (2021). Statistical Inference, Cengage Learning."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Censi, A. (2009, January 12\u201317). On achievable accuracy for pose tracking. 
Proceedings of the Robotics and Automation, Kobe, Japan.","DOI":"10.1109\/ROBOT.2009.5152236"}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/18\/4627\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T20:54:15Z","timestamp":1760129655000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/18\/4627"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,9,20]]},"references-count":35,"journal-issue":{"issue":"18","published-online":{"date-parts":[[2023,9]]}},"alternative-id":["rs15184627"],"URL":"https:\/\/doi.org\/10.3390\/rs15184627","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,9,20]]}}}