{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T01:23:41Z","timestamp":1760059421214,"version":"build-2065373602"},"reference-count":93,"publisher":"MDPI AG","issue":"6","license":[{"start":{"date-parts":[[2025,6,15]],"date-time":"2025-06-15T00:00:00Z","timestamp":1749945600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Mexican Ministry of Science, Humanities, Technology and Innovation (SECIHTI)"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Entropy"],"abstract":"<jats:p>Robust indoor robot navigation typically demands either costly sensors or extensive training data. We propose a cost-effective RGB-D navigation pipeline that couples feature-based relative pose estimation with a lightweight multi-layer-perceptron (MLP) policy. RGB-D keyframes extracted from human-driven traversals form nodes of a topological map; edges are added when visual similarity and geometric\u2013kinematic constraints are jointly satisfied. During autonomy, LightGlue features and SVD give six-DoF relative pose to the active keyframe, and the MLP predicts one of four discrete actions. Low visual similarity or detected obstacles trigger graph editing and Dijkstra replanning in real time. Across eight tasks in four Habitat-Sim environments, the agent covered 190.44 m, replanning when required, and consistently stopped within 0.1 m of the goal while running on commodity hardware. An information-theoretic analysis over the Multi-Illumination dataset shows that LightGlue maximizes per-second information gain under lighting changes, motivating its selection. The modular design attains reliable navigation without metric SLAM or large-scale learning, and seamlessly accommodates future perception or policy upgrades.<\/jats:p>","DOI":"10.3390\/e27060641","type":"journal-article","created":{"date-parts":[[2025,6,16]],"date-time":"2025-06-16T10:47:22Z","timestamp":1750070842000},"page":"641","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Efficient Learning-Based Robotic Navigation Using Feature-Based RGB-D Pose Estimation and Topological Maps"],"prefix":"10.3390","volume":"27","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-5175-0384","authenticated-orcid":false,"given":"Eder A.","family":"Rodr\u00edguez-Mart\u00ednez","sequence":"first","affiliation":[{"name":"Faculty of Engineering, Autonomous University of Baja California, Blvd. Benito Ju\u00e1rez, Mexicali 21280, Mexico"},{"name":"National Postdoctoral Fellowships, Ministry of Science, Humanities, Technology and Innovation, Insurgentes Sur, Mexico City 03940, Mexico"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0618-0455","authenticated-orcid":false,"given":"Jes\u00fas El\u00edas","family":"Miranda-Vega","sequence":"additional","affiliation":[{"name":"Department of Electrical and Electronic Engineering, Tecnol\u00f3gico Nacional de M\u00e9xico\/IT Mexicali, Mexicali 21376, Mexico"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4813-1635","authenticated-orcid":false,"given":"Farouk","family":"Achakir","sequence":"additional","affiliation":[{"name":"Belive AI Lab, Belive.ai (former VusionGroup), 21 Rue Millevoye, 80000 Amiens, France"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4270-6872","authenticated-orcid":false,"given":"Oleg","family":"Sergiyenko","sequence":"additional","affiliation":[{"name":"Institute of Engineering, Autonomous University of Baja California, Calle Normal, Mexicali 21100, Mexico"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1830-0226","authenticated-orcid":false,"given":"Julio C.","family":"Rodr\u00edguez-Qui\u00f1onez","sequence":"additional","affiliation":[{"name":"Faculty of Engineering, Autonomous University of Baja California, Blvd. Benito Ju\u00e1rez, Mexicali 21280, Mexico"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0055-4797","authenticated-orcid":false,"given":"Daniel","family":"Hern\u00e1ndez Balbuena","sequence":"additional","affiliation":[{"name":"Faculty of Engineering, Autonomous University of Baja California, Blvd. Benito Ju\u00e1rez, Mexicali 21280, Mexico"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1477-7449","authenticated-orcid":false,"given":"Wendy","family":"Flores-Fuentes","sequence":"additional","affiliation":[{"name":"Faculty of Engineering, Autonomous University of Baja California, Blvd. Benito Ju\u00e1rez, Mexicali 21280, Mexico"}]}],"member":"1968","published-online":{"date-parts":[[2025,6,15]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"9604","DOI":"10.1109\/TNNLS.2022.3167688","article-title":"Perception and navigation in autonomous systems in the era of learning: A survey","volume":"34","author":"Tang","year":"2022","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"4","DOI":"10.1109\/MAES.2022.3187142","article-title":"I am not afraid of the GPS jammer: Resilient navigation via signals of opportunity in GPS-denied environments","volume":"37","author":"Kassas","year":"2022","journal-title":"IEEE Aerosp. Electron. Syst. Mag."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"104533","DOI":"10.1016\/j.robot.2023.104533","article-title":"A review of UAV autonomous navigation in GPS-denied environments","volume":"170","author":"Chang","year":"2023","journal-title":"Robot. Auton. Syst."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"1389","DOI":"10.1109\/JAS.2021.1004084","article-title":"An RGB-D camera based visual positioning system for assistive navigation by a robotic navigation aid","volume":"8","author":"Zhang","year":"2021","journal-title":"IEEE\/CAA J. Autom. Sin."},{"key":"ref_5","first-page":"1","article-title":"Robust pose estimation via hybrid point and twin line reprojection for RGB-D vision navigation","volume":"71","author":"Sun","year":"2022","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"107419","DOI":"10.1016\/j.compag.2022.107419","article-title":"An RGB-D multi-view perspective for autonomous agricultural robots","volume":"202","author":"Vulpi","year":"2022","journal-title":"Comput. Electron. Agric."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"815","DOI":"10.1631\/FITEE.2000097","article-title":"A survey on indoor 3D modeling and applications via RGB-D devices","volume":"22","author":"Yuan","year":"2021","journal-title":"Front. Inf. Technol. Electron. Eng."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Skocze\u0144, M., Ochman, M., Spyra, K., Nikodem, M., Krata, D., Panek, M., and Paw\u0142owski, A. (2021). Obstacle detection system for agricultural mobile robot application using RGB-D cameras. Sensors, 21.","DOI":"10.3390\/s21165292"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Chen, W., Shang, G., Ji, A., Zhou, C., Wang, X., Xu, C., Li, Z., and Hu, K. (2022). An overview on visual slam: From tradition to semantic. Remote Sens., 14.","DOI":"10.3390\/rs14133010"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"54","DOI":"10.1007\/s00138-022-01306-w","article-title":"A comprehensive overview of dynamic visual SLAM and deep learning: Concepts, methods and challenges","volume":"33","author":"Beghdadi","year":"2022","journal-title":"Mach. Vis. Appl."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Shah, D., Eysenbach, B., Kahn, G., Rhinehart, N., and Levine, S. (June, January 30). Ving: Learning open-world navigation with visual goals. Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi\u2019an, China.","DOI":"10.1109\/ICRA48506.2021.9561936"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Liu, R., Wang, X., Wang, W., and Yang, Y. (2023, January 1\u20136). Bird\u2019s-Eye-View Scene Graph for Vision-Language Navigation. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Paris, France.","DOI":"10.1109\/ICCV51070.2023.01007"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Wang, H., Wang, W., Liang, W., Xiong, C., and Shen, J. (2021, January 20\u201325). Structured scene memory for vision-language navigation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00835"},{"key":"ref_14","first-page":"36858","article-title":"Towards versatile embodied navigation","volume":"35","author":"Wang","year":"2022","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"107945","DOI":"10.1016\/j.cnsns.2024.107945","article-title":"Anti-disturbance state estimation for PDT-switched RDNNs utilizing time-sampling and space-splitting measurements","volume":"132","author":"Song","year":"2024","journal-title":"Commun. Nonlinear Sci. Numer. Simul."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"105513","DOI":"10.1016\/j.conengprac.2023.105513","article-title":"1 bit encoding\u2013decoding-based event-triggered fixed-time adaptive control for unmanned surface vehicle with guaranteed tracking performance","volume":"135","author":"Song","year":"2023","journal-title":"Control Eng. Pract."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"123","DOI":"10.1016\/j.isatra.2023.07.043","article-title":"Q-learning based fault estimation and fault tolerant iterative learning control for MIMO systems","volume":"142","author":"Wang","year":"2023","journal-title":"ISA Trans."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Li, J., Wang, X., Tang, S., Shi, H., Wu, F., Zhuang, Y., and Wang, W.Y. (2020, January 13\u201319). Unsupervised reinforcement learning of transferable meta-skills for embodied navigation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.01214"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Guo, J., Lu, Z., Wang, T., Huang, W., and Liu, H. (2021). Object goal visual navigation using Semantic Spatial Relationships. Proceedings of the CAAI International Conference on Artificial Intelligence, Springer.","DOI":"10.1007\/978-3-030-93046-2_7"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"1729881420929498","DOI":"10.1177\/1729881420929498","article-title":"Autonomous navigation and obstacle avoidance of an omnidirectional mobile robot using swarm optimization and sensors deployment","volume":"17","author":"Ajeil","year":"2020","journal-title":"Int. J. Adv. Robot. Syst."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Mendes, E., Koch, P., and Lacroix, S. (2016, January 23\u201327). ICP-based pose-graph SLAM. Proceedings of the 2016 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Lausanne, Switzerland.","DOI":"10.1109\/SSRR.2016.7784298"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Bai, H. (2022). ICP Algorithm: Theory, Practice And Its SLAM-oriented Taxonomy. arXiv.","DOI":"10.54254\/2755-2721\/2\/ojs\/20220512"},{"key":"ref_23","unstructured":"Huang, X., Mei, G., Zhang, J., and Abbas, R. (2021). A comprehensive survey on point cloud registration. arXiv."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Grossberg, S. (2020). A path toward explainable AI and autonomous adaptive intelligence: Deep learning, adaptive resonance, and models of perception, emotion, and action. Front. Neurorobot., 14.","DOI":"10.3389\/fnbot.2020.00036"},{"key":"ref_25","unstructured":"Kumar, A., Gupta, S., Fouhey, D., Levine, S., and Malik, J. (2018, January 3\u20138). Visual memory for robust path following. Proceedings of the 32nd International Conference on Neural Information Processing System, Montreal, QC, Canada."},{"key":"ref_26","first-page":"4247","article-title":"Object goal navigation using goal-oriented semantic exploration","volume":"33","author":"Chaplot","year":"2020","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Kwon, O., Kim, N., Choi, Y., Yoo, H., Park, J., and Oh, S. (2021, January 11\u201317). Visual graph memory with unsupervised representation for visual navigation. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, BC, Canada.","DOI":"10.1109\/ICCV48922.2021.01559"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Wang, F., Zhang, C., Zhang, W., Fang, C., Xia, Y., Liu, Y., and Dong, H. (2022). Object-based reliable visual navigation for mobile robot. Sensors, 22.","DOI":"10.3390\/s22062387"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Lindenberger, P., Sarlin, P.E., and Pollefeys, M. (2023, January 1\u20136). LightGlue: Local Feature Matching at Light Speed. Proceedings of the ICCV 2023, Paris, France.","DOI":"10.1109\/ICCV51070.2023.01616"},{"key":"ref_30","unstructured":"Puig, X., Undersander, E., Szot, A., Cote, M.D., Yang, T.Y., Partsey, R., Desai, R., Clegg, A.W., Hlavac, M., and Min, S.Y. (2023). Habitat 3.0: A co-habitat for humans, avatars and robots. arXiv."},{"key":"ref_31","unstructured":"Szot, A., Clegg, A., Undersander, E., Wijmans, E., Zhao, Y., Turner, J., Maestre, N., Mukadam, M., Chaplot, D., and Maksymets, O. (2021, January 6\u201314). Habitat 2.0: Training Home Assistants to Rearrange their Habitat. Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Online."},{"key":"ref_32","unstructured":"Savva, M., Kadian, A., Maksymets, O., Zhao, Y., Wijmans, E., Jain, B., Straub, J., Liu, J., Koltun, V., and Malik, J. (November, January 27). Habitat: A platform for embodied ai research. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"534","DOI":"10.1002\/rob.20342","article-title":"Visual teach and repeat for long-range rover autonomy","volume":"27","author":"Furgale","year":"2010","journal-title":"J. Field Robot."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Zheng, S., Wang, J., Rizos, C., Ding, W., and El-Mowafy, A. (2023). Simultaneous localization and mapping (slam) for autonomous driving: Concept and analysis. Remote Sens., 15.","DOI":"10.3390\/rs15041156"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Keetha, N., Karhade, J., Jatavallabhula, K.M., Yang, G., Scherer, S., Ramanan, D., and Luiten, J. (2024, January 16\u201324). SplaTAM: Splat Track & Map 3D Gaussians for Dense RGB-D SLAM. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR52733.2024.02018"},{"key":"ref_36","first-page":"8403","article-title":"Linear RGB-D SLAM for structured environments","volume":"44","author":"Joo","year":"2021","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Ferrari, V., Hebert, M., Sminchisescu, C., and Weiss, Y. (2018). Linear RGB-D SLAM for Planar Environments. Proceedings of the Computer Vision\u2014ECCV 2018, Springer.","DOI":"10.1007\/978-3-030-01228-1"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"1047","DOI":"10.1109\/LRA.2017.2656241","article-title":"Fast, on-line collision avoidance for dynamic vehicles using buffered voronoi cells","volume":"2","author":"Zhou","year":"2017","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"972","DOI":"10.1109\/TCST.2019.2949540","article-title":"Optimization-based collision avoidance","volume":"29","author":"Zhang","year":"2020","journal-title":"IEEE Trans. Control Syst. Technol."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Jiang, H., Wang, H., Yau, W.Y., and Wan, K.W. (2020, January 9\u201313). A brief survey: Deep reinforcement learning in mobile robot navigation. Proceedings of the 2020 15th IEEE Conference on Industrial Electronics and Applications (ICIEA), Kristiansand, Norway.","DOI":"10.1109\/ICIEA48937.2020.9248288"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Beeching, E., Dibangoye, J., Simonin, O., and Wolf, C. (2020, January 23\u201328). Learning to plan with uncertain topological maps. Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK.","DOI":"10.1007\/978-3-030-58580-8_28"},{"key":"ref_42","unstructured":"Chaplot, D.S., Salakhutdinov, R., Gupta, A., and Gupta, S. (2020, January 13\u201319). Neural Topological SLAM for Visual Navigation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA."},{"key":"ref_43","unstructured":"Andreasson, H., and Duckett, T. (2005, January 2\u20136). Incremental Robot Mapping with Fingerprints of Places. Proceedings of the IFAC Proceedings Volumes, Edmonton, AB, Canada."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Angeli, A., Doncieux, S., Meyer, J.A., and Filliat, D. (2009, January 12\u201317). Visual topological SLAM and global localization. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan.","DOI":"10.1109\/ROBOT.2009.5152501"},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"55","DOI":"10.1007\/s10846-021-01390-6","article-title":"Image-based indoor topological navigation with collision avoidance for resource-constrained mobile robots","volume":"102","author":"Bista","year":"2021","journal-title":"J. Intell. Robot. Syst."},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Muravyev, K., and Yakovlev, K. (2024). NavTopo: Leveraging Topological Maps for Autonomous Navigation of a Mobile Robot. Proceedings of the International Conference on Interactive Collaborative Robotics, Springer.","DOI":"10.1007\/978-3-031-71360-6_11"},{"key":"ref_47","first-page":"11310","article-title":"Photometric-Planner for Visual Path Following","volume":"21","author":"Caron","year":"2020","journal-title":"IEEE Sens. J."},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"1933","DOI":"10.1109\/JAS.2024.124332","article-title":"Cognitive navigation for intelligent mobile robots: A learning-based approach with topological memory configuration","volume":"11","author":"Liu","year":"2024","journal-title":"IEEE\/CAA J. Autom. Sin."},{"key":"ref_49","unstructured":"Besl, P.J., and McKay, N.D. (1991, January 12\u201315). Method for registration of 3-D shapes. Proceedings of the Sensor fusion IV: Control Paradigms and Data Structures, SPIE, Boston, MA, USA."},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Macario Barros, A., Michel, M., Moline, Y., Corre, G., and Carrel, F. (2022). A comprehensive survey of visual slam algorithms. Robotics, 11.","DOI":"10.3390\/robotics11010024"},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Tiar, R., Lakrouf, M., and Azouaoui, O. (2015, January 22\u201324). Fast ICP-SLAM for a bi-steerable mobile robot in large environments. Proceedings of the 2015 IEEE International Workshop of Electronics, Control, Measurement, Signals and their Application to Mechatronics (ECMSM), Liberec, Czech Republic.","DOI":"10.1109\/ECMSM.2015.7208683"},{"key":"ref_52","first-page":"8288","article-title":"Rigid 3-D registration: A simple method free of SVD and eigendecomposition","volume":"69","author":"Wu","year":"2020","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"7352691","DOI":"10.1155\/2018\/7352691","article-title":"A point cloud registration algorithm based on feature extraction and matching","volume":"2018","author":"Liu","year":"2018","journal-title":"Math. Probl. Eng."},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"555","DOI":"10.1007\/s10846-019-01136-5","article-title":"Extending maps with semantic and contextual object information for robot navigation: A learning-based framework using visual and depth cues","volume":"99","author":"Martins","year":"2020","journal-title":"J. Intell. Robot. Syst."},{"key":"ref_55","doi-asserted-by":"crossref","unstructured":"Liu, R., Wang, W., and Yang, Y. (2024, January 16\u201322). Volumetric Environment Representation for Vision-Language Navigation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR52733.2024.01544"},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Wang, X., Wang, W., Shao, J., and Yang, Y. (2023, January 17\u201324). Lana: A language-capable navigator for instruction following and generation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.01826"},{"key":"ref_57","doi-asserted-by":"crossref","first-page":"365","DOI":"10.1007\/s10462-022-10174-9","article-title":"Visual language navigation: A survey and open challenges","volume":"56","author":"Park","year":"2023","journal-title":"Artif. Intell. Rev."},{"key":"ref_58","unstructured":"Liu, H., Xue, W., Chen, Y., Chen, D., Zhao, X., Wang, K., Hou, L., Li, R., and Peng, W. (2024). A survey on hallucination in large vision-language models. arXiv."},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Rodr\u00edguez Mart\u00ednez, E.A., Caron, G., P\u00e9gard, C., and Lara-Alabazares, D. (August, January 31). Photometric Path Planning for Vision-Based Navigation. Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France.","DOI":"10.1109\/ICRA40945.2020.9197091"},{"key":"ref_60","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_61","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., and Fei-Fei, L. (2009, January 20\u201325). Imagenet: A large-scale hierarchical image database. Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA.","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"ref_62","unstructured":"Wang, Y. (2022). A Survey on Efficient Processing of Similarity Queries over Neural Embeddings. arXiv."},{"key":"ref_63","unstructured":"Simonyan, K., and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv."},{"key":"ref_64","doi-asserted-by":"crossref","first-page":"551","DOI":"10.1137\/1035134","article-title":"On the early history of the singular value decomposition","volume":"35","author":"Stewart","year":"1993","journal-title":"SIAM Rev."},{"key":"ref_65","doi-asserted-by":"crossref","first-page":"23","DOI":"10.1007\/s11263-020-01359-2","article-title":"Image matching from handcrafted to deep features: A survey","volume":"129","author":"Ma","year":"2021","journal-title":"Int. J. Comput. Vis."},{"key":"ref_66","doi-asserted-by":"crossref","first-page":"444","DOI":"10.1002\/nme.2586","article-title":"Rigid body dynamics in terms of quaternions: Hamiltonian formulation and conserving numerical integration","volume":"79","author":"Betsch","year":"2009","journal-title":"Int. J. Numer. Methods Eng."},{"key":"ref_67","doi-asserted-by":"crossref","first-page":"269","DOI":"10.1007\/BF01386390","article-title":"A note on two problems in connexion with graphs","volume":"1","author":"Dijkstra","year":"1959","journal-title":"Numer. Math."},{"key":"ref_68","unstructured":"Murmann, L., Gharbi, M., Aittala, M., and Durand, F. (November, January 27). A dataset of multi-illumination images in the wild. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_69","doi-asserted-by":"crossref","unstructured":"Tareen, S.A.K., and Saleem, Z. (2018, January 3\u20134). A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK. Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Pakistan.","DOI":"10.1109\/ICOMET.2018.8346440"},{"key":"ref_70","doi-asserted-by":"crossref","first-page":"91","DOI":"10.1023\/B:VISI.0000029664.99615.94","article-title":"Distinctive image features from scale-invariant keypoints","volume":"60","author":"Lowe","year":"2004","journal-title":"Int. J. Comput. Vis."},{"key":"ref_71","doi-asserted-by":"crossref","unstructured":"Sarlin, P.E., DeTone, D., Malisiewicz, T., and Rabinovich, A. (2020, January 13\u201319). SuperGlue: Learning Feature Matching with Graph Neural Networks. Proceedings of the CVPR 2020, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00499"},{"key":"ref_72","doi-asserted-by":"crossref","first-page":"379","DOI":"10.1002\/j.1538-7305.1948.tb01338.x","article-title":"A mathematical theory of communication","volume":"27","author":"Shannon","year":"1948","journal-title":"Bell Syst. Tech. J."},{"key":"ref_73","first-page":"12","article-title":"Entropy, relative entropy and mutual information","volume":"2","author":"Cover","year":"1991","journal-title":"Elem. Inf. Theory"},{"key":"ref_74","doi-asserted-by":"crossref","unstructured":"Chang, A., Dai, A., Funkhouser, T., Halber, M., Niessner, M., Savva, M., Song, S., Zeng, A., and Zhang, Y. (2017, January 10\u201312). Matterport3D: Learning from RGB-D Data in Indoor Environments. Proceedings of the International Conference on 3D Vision (3DV), Qingdao, China.","DOI":"10.1109\/3DV.2017.00081"},{"key":"ref_75","doi-asserted-by":"crossref","unstructured":"Placed, J.A., Rodr\u00edguez, J.J.G., Tard\u00f3s, J.D., and Castellanos, J.A. (2022). ExplORB-SLAM: Active visual SLAM exploiting the pose-graph topology. Proceedings of the Iberian Robotics Conference, Springer.","DOI":"10.1007\/978-3-031-21065-5_17"},{"key":"ref_76","doi-asserted-by":"crossref","unstructured":"Jiang, H., Karpur, A., Cao, B., Huang, Q., and Araujo, A. (2024). OmniGlue: Generalizable Feature Matching with Foundation Model Guidance. arXiv.","DOI":"10.1109\/CVPR52733.2024.01878"},{"key":"ref_77","doi-asserted-by":"crossref","first-page":"101787","DOI":"10.1016\/j.aei.2022.101787","article-title":"Robotics in construction: A critical review of the reinforcement learning and imitation learning paradigms","volume":"54","author":"Delgado","year":"2022","journal-title":"Adv. Eng. Inform."},{"key":"ref_78","doi-asserted-by":"crossref","first-page":"82","DOI":"10.1109\/JAS.2019.1911825","article-title":"Deep imitation learning for autonomous vehicles based on convolutional neural networks","volume":"7","author":"Kebria","year":"2019","journal-title":"IEEE\/CAA J. Autom. Sin."},{"key":"ref_79","doi-asserted-by":"crossref","unstructured":"Maur\u00edcio, J., Domingues, I., and Bernardino, J. (2023). Comparing vision transformers and convolutional neural networks for image classification: A literature review. Appl. Sci., 13.","DOI":"10.3390\/app13095521"},{"key":"ref_80","doi-asserted-by":"crossref","unstructured":"Muflikhah, L., and Baharudin, B. (2009, January 13\u201315). Document clustering using concept space and cosine similarity measurement. Proceedings of the 2009 International Conference on Computer Technology and Development, Kota Kinabalu, Malaysia.","DOI":"10.1109\/ICCTD.2009.206"},{"key":"ref_81","doi-asserted-by":"crossref","unstructured":"Ruan, S., Dong, Y., Su, H., Peng, J., Chen, N., and Wei, X. (2023). Improving viewpoint robustness for visual recognition via adversarial training. arXiv.","DOI":"10.1109\/ICCV51070.2023.00434"},{"key":"ref_82","unstructured":"Hendrycks, D., Mu, N., Cubuk, E.D., Zoph, B., Gilmer, J., and Lakshminarayanan, B. (2019). Augmix: A simple data processing method to improve robustness and uncertainty. arXiv."},{"key":"ref_83","unstructured":"Hu, C., Shi, W., Li, C., Sun, J., Wang, D., Wu, J., and Tang, G. (2023). Impact of light and shadow on robustness of deep neural networks. arXiv."},{"key":"ref_84","doi-asserted-by":"crossref","first-page":"20707","DOI":"10.1109\/ACCESS.2017.2757765","article-title":"Robust topological navigation via convolutional neural network feature and sharpness measure","volume":"5","author":"Ma","year":"2017","journal-title":"IEEE Access"},{"key":"ref_85","doi-asserted-by":"crossref","first-page":"291","DOI":"10.1109\/TIP.2018.2867733","article-title":"Adversarial spatio-temporal learning for video deblurring","volume":"28","author":"Zhang","year":"2018","journal-title":"IEEE Trans. Image Process."},{"key":"ref_86","doi-asserted-by":"crossref","unstructured":"Qiu, Y., Zhang, K., Wang, C., Luo, W., Li, H., and Jin, Z. (2023, January 2\u20133). Mb-taylorformer: Multi-branch efficient transformer expanded by taylor formula for image dehazing. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Paris, France.","DOI":"10.1109\/ICCV51070.2023.01176"},{"key":"ref_87","doi-asserted-by":"crossref","first-page":"4541","DOI":"10.1007\/s11263-024-02056-0","article-title":"Gridformer: Residual dense transformer with grid structure for image restoration in adverse weather conditions","volume":"132","author":"Wang","year":"2024","journal-title":"Int. J. Comput. Vis."},{"key":"ref_88","doi-asserted-by":"crossref","unstructured":"Molina-Leal, A., G\u00f3mez-Espinosa, A., Escobedo Cabello, J.A., Cuan-Urquizo, E., and Cruz-Ram\u00edrez, S.R. (2021). Trajectory planning for a Mobile robot in a dynamic environment using an LSTM neural network. Appl. Sci., 11.","DOI":"10.3390\/app112210689"},{"key":"ref_89","doi-asserted-by":"crossref","unstructured":"Gao, M., Yu, R., Li, A., Morariu, V.I., and Davis, L.S. (2018, January 18\u201323). Dynamic zoom-in network for fast object detection in large images. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00724"},{"key":"ref_90","doi-asserted-by":"crossref","unstructured":"Di Giammarino, L., Brizi, L., Guadagnino, T., Stachniss, C., and Grisetti, G. (2022, January 23\u201327). Md-slam: Multi-cue direct slam. Proceedings of the 2022 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan.","DOI":"10.1109\/IROS47612.2022.9981147"},{"key":"ref_91","doi-asserted-by":"crossref","first-page":"15897","DOI":"10.1109\/TITS.2023.3248089","article-title":"Lightweight real-time semantic segmentation network with efficient transformer and CNN","volume":"24","author":"Xu","year":"2023","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_92","first-page":"648","article-title":"A review on energy efficiency in autonomous mobile robots","volume":"43","author":"Wu","year":"2023","journal-title":"Robot. Intell. Autom."},{"key":"ref_93","doi-asserted-by":"crossref","unstructured":"Ke, B., Obukhov, A., Huang, S., Metzger, N., Daudt, R.C., and Schindler, K. (2024, January 16\u201322). Repurposing diffusion-based image generators for monocular depth estimation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR52733.2024.00907"}],"container-title":["Entropy"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1099-4300\/27\/6\/641\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T17:52:24Z","timestamp":1760032344000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1099-4300\/27\/6\/641"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,15]]},"references-count":93,"journal-issue":{"issue":"6","published-online":{"date-parts":[[2025,6]]}},"alternative-id":["e27060641"],"URL":"https:\/\/doi.org\/10.3390\/e27060641","relation":{},"ISSN":["1099-4300"],"issn-type":[{"type":"electronic","value":"1099-4300"}],"subject":[],"published":{"date-parts":[[2025,6,15]]}}}