{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,8]],"date-time":"2026-04-08T07:10:01Z","timestamp":1775632201887,"version":"3.50.1"},"reference-count":63,"publisher":"American Association for the Advancement of Science (AAAS)","issue":"92","content-domain":{"domain":["www.science.org"],"crossmark-restriction":true},"short-container-title":["Sci. Robot."],"published-print":{"date-parts":[[2024,7,17]]},"abstract":"<jats:p>Navigation is an essential capability for autonomous robots. In particular, visual navigation has been a major research topic in robotics because cameras are lightweight, power-efficient sensors that provide rich information on the environment. However, the main challenge of visual navigation is that it requires substantial computational power and memory for visual processing and storage of the results. As of yet, this has precluded its use on small, extremely resource-constrained robots such as lightweight drones. Inspired by the parsimony of natural intelligence, we propose an insect-inspired approach toward visual navigation that is specifically aimed at extremely resource-restricted robots. It is a route-following approach in which a robot\u2019s outbound trajectory is stored as a collection of highly compressed panoramic images together with their spatial relationships as measured with odometry. During the inbound journey, the robot uses a combination of odometry and visual homing to return to the stored locations, with visual homing preventing the buildup of odometric drift. A main advancement of the proposed strategy is that the number of stored compressed images is minimized by spacing them apart as far as the accuracy of odometry allows. To demonstrate the suitability for small systems, we implemented the strategy on a tiny 56-gram drone. The drone could successfully follow routes up to 100 meters with a trajectory representation that consumed less than 20 bytes per meter. 
The presented method forms a substantial step toward the autonomous visual navigation of tiny robots, facilitating their more widespread application.<\/jats:p>","DOI":"10.1126\/scirobotics.adk0310","type":"journal-article","created":{"date-parts":[[2024,7,17]],"date-time":"2024-07-17T17:58:13Z","timestamp":1721239093000},"update-policy":"https:\/\/doi.org\/10.34133\/aaas_crossmark","source":"Crossref","is-referenced-by-count":20,"title":["Visual route following for tiny autonomous robots"],"prefix":"10.1126","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0772-3821","authenticated-orcid":true,"given":"Tom","family":"van Dijk","sequence":"first","affiliation":[{"name":"Control and Operations Department, Faculty of Aerospace Engineering, Delft University of Technology, Delft, Netherlands."}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6795-8454","authenticated-orcid":true,"given":"Christophe","family":"De Wagter","sequence":"additional","affiliation":[{"name":"Control and Operations Department, Faculty of Aerospace Engineering, Delft University of Technology, Delft, Netherlands."}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8265-1496","authenticated-orcid":true,"given":"Guido C. H. E.","family":"de Croon","sequence":"additional","affiliation":[{"name":"Control and Operations Department, Faculty of Aerospace Engineering, Delft University of Technology, Delft, Netherlands."}]}],"member":"221","reference":[{"key":"e_1_3_2_2_2","doi-asserted-by":"crossref","unstructured":"K. P. Valavanis G. J. Vachtsevanos Eds. Handbook of Unmanned Aerial Vehicles (Springer Netherlands 2015).","DOI":"10.1007\/978-90-481-9707-1"},{"key":"e_1_3_2_3_2","doi-asserted-by":"crossref","unstructured":"M. W. Mueller M. Hamer R. D\u2019Andrea Fusing ultra-wideband range measurements with accelerometers and rate gyroscopes for quadrocopter state estimation in 2015 IEEE International Conference on Robotics and Automation (ICRA) (IEEE 2015) pp. 
1730\u20131736.","DOI":"10.1109\/ICRA.2015.7139421"},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/LRA.2019.2891086"},{"key":"e_1_3_2_5_2","doi-asserted-by":"publisher","DOI":"10.3390\/electronics9050741"},{"key":"e_1_3_2_6_2","doi-asserted-by":"crossref","unstructured":"S. Balasubramanian Y. M. Chukewad J. M. James G. L. Barrows S. B. Fuller An insect-sized robot that uses a custom-built onboard camera and a neural network to classify and respond to visual input in 2018 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob) (IEEE 2018) pp. 1297\u20131302.","DOI":"10.1109\/BIOROB.2018.8488007"},{"key":"e_1_3_2_7_2","doi-asserted-by":"publisher","DOI":"10.3390\/s141121702"},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1109\/MRA.2006.1638022"},{"key":"e_1_3_2_9_2","doi-asserted-by":"crossref","unstructured":"B. Bodin H. Wagstaff S. Saecdi L. Nardi E. Vespa J. Mawer A. Nisbet M. Lujan S. Furber A. J. Davison P. H. J. Kelly M. F. P. O\u2019Boyle SLAMBench2: Multi-objective head-to-head benchmarking for visual SLAM in 2018 IEEE International Conference on Robotics and Automation (ICRA) (IEEE 2018) pp. 3637\u20133644.","DOI":"10.1109\/ICRA.2018.8460558"},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.1017\/S0263574713001070"},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.robot.2014.11.009"},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/LRA.2022.3149030"},{"key":"e_1_3_2_13_2","doi-asserted-by":"crossref","unstructured":"B. Ferrarini M. Milford K. D. McDonald-Maier S. Ehsan Highly-efficient binary neural networks for visual place recognition in 2022 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE 2022) pp. 
5493\u20135500.","DOI":"10.1109\/IROS47612.2022.9981978"},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/JSSC.2018.2886342"},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1098\/rsta.2019.0061"},{"key":"e_1_3_2_16_2","unstructured":"Y. Sun A. M. Kist Deep learning on edge TPUs. arXiv:2108.13732 [cs.CV] (2021)."},{"key":"e_1_3_2_17_2","doi-asserted-by":"crossref","unstructured":"N. Tiwari K. Mondal NCS based ultra low power optimized machine learning techniques for image classification in 2019 IEEE Region 10 Symposium (TENSYMP) (IEEE 2019) pp. 750\u2013753.","DOI":"10.1109\/TENSYMP46218.2019.8971238"},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","DOI":"10.1002\/rob.21956"},{"key":"e_1_3_2_19_2","doi-asserted-by":"crossref","unstructured":"R. Wehner M. V. Srinivasan Path integration in insects in The Neurobiology of Spatial Behaviour (Oxford Univ. Press 2003) pp. 9\u201330.","DOI":"10.1093\/acprof:oso\/9780198515241.003.0001"},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.1242\/jeb.188094"},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.1007\/BF00243395"},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.1126\/science.aal4835"},{"key":"e_1_3_2_23_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cub.2017.08.052"},{"key":"e_1_3_2_24_2","doi-asserted-by":"publisher","DOI":"10.1038\/nature22343"},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1038\/295560a0"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pcbi.1002336"},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pcbi.1007631"},{"key":"e_1_3_2_28_2","doi-asserted-by":"crossref","unstructured":"A. Denuelle M. V. Srinivasan Bio-inspired visual guidance: From insect homing to UAS navigation in 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO) (IEEE 2015) pp. 
326\u2013332.","DOI":"10.1109\/ROBIO.2015.7418788"},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.1126\/scirobotics.aau0307"},{"key":"e_1_3_2_30_2","doi-asserted-by":"publisher","DOI":"10.1177\/105971239700600104"},{"key":"e_1_3_2_31_2","doi-asserted-by":"publisher","DOI":"10.1007\/s004220050470"},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.robot.2005.12.001"},{"key":"e_1_3_2_33_2","doi-asserted-by":"crossref","unstructured":"A. Denuelle M. V. Srinivasan A sparse snapshot-based navigation strategy for UAS guidance in natural environments in 2016 IEEE International Conference on Robotics and Automation (ICRA) (IEEE 2016) pp. 3455\u20133462.","DOI":"10.1109\/ICRA.2016.7487524"},{"key":"e_1_3_2_34_2","doi-asserted-by":"crossref","unstructured":"A. Vardy Long-range visual homing in 2006 IEEE International Conference on Robotics and Biomimetics (IEEE 2006) pp. 220\u2013226.","DOI":"10.1109\/ROBIO.2006.340381"},{"key":"e_1_3_2_35_2","doi-asserted-by":"crossref","unstructured":"S. Raj P. R. Giordano F. Chaumette Appearance-based indoor navigation by IBVS using mutual information in 2016 14th International Conference on Control Automation Robotics and Vision (ICARCV) (IEEE 2016) pp. 1\u20136.","DOI":"10.1109\/ICARCV.2016.7838760"},{"key":"e_1_3_2_36_2","unstructured":"J. Courbon G. Blanc Y. Mezouar P. Martinet Navigation of a non-holonomic mobile robot with a memory of omnidirectional images in 2007 ICRA 2007 Workshop: Planning Perception and Navigation for Intelligent Vehicles (IEEE 2007)."},{"key":"e_1_3_2_37_2","doi-asserted-by":"publisher","DOI":"10.1002\/rob.20354"},{"key":"e_1_3_2_38_2","doi-asserted-by":"crossref","unstructured":"D. Dall\u2019Osto T. Fischer M. Milford Fast and robust bio-inspired teach and repeat navigation in 2021 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE 2021) pp. 
500\u2013507.","DOI":"10.1109\/IROS51168.2021.9636334"},{"key":"e_1_3_2_39_2","doi-asserted-by":"publisher","DOI":"10.1177\/0278364908098412"},{"key":"e_1_3_2_40_2","doi-asserted-by":"crossref","unstructured":"T. Krajnik F. Majer L. Halodova T. Vintr Navigation without localisation: Reliable teach and repeat based on the convergence theorem in 2018 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE 2018) pp. 1657\u20131664.","DOI":"10.1109\/IROS.2018.8593803"},{"key":"e_1_3_2_41_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.robot.2007.05.004"},{"key":"e_1_3_2_42_2","doi-asserted-by":"publisher","DOI":"10.1023\/A:1008821210922"},{"key":"e_1_3_2_43_2","doi-asserted-by":"publisher","DOI":"10.1109\/MRA.2006.250573"},{"key":"e_1_3_2_44_2","doi-asserted-by":"crossref","unstructured":"M. Liu C. Pradalier Qijun Chen R. Siegwart A bearing-only 2D\/3D-homing method under a visual servoing framework in 2010 IEEE International Conference on Robotics and Automation (IEEE 2010) pp. 4062\u20134067.","DOI":"10.1109\/ROBOT.2010.5509441"},{"key":"e_1_3_2_45_2","doi-asserted-by":"crossref","unstructured":"F. Chaumette S. Hutchinson P. Corke \u201cVisual servoing\u201d in Springer Handbook of Robotics B. Siciliano O. Khatib Eds. (Springer 2016) pp. 841\u2013866.","DOI":"10.1007\/978-3-319-32552-1_34"},{"key":"e_1_3_2_46_2","doi-asserted-by":"crossref","unstructured":"D. G. Lowe Object recognition from local scale-invariant features in Proceedings of the Seventh IEEE International Conference on Computer Vision (IEEE 1999) vol. 2 pp. 1150\u20131157.","DOI":"10.1109\/ICCV.1999.790410"},{"key":"e_1_3_2_47_2","doi-asserted-by":"crossref","unstructured":"S. Leutenegger M. Chli R. Y. Siegwart BRISK: Binary robust invariant scalable keypoints in 2011 International Conference on Computer Vision (IEEE 2011) pp. 
2548\u20132555.","DOI":"10.1109\/ICCV.2011.6126542"},{"key":"e_1_3_2_48_2","doi-asserted-by":"publisher","DOI":"10.1177\/0278364918761115"},{"key":"e_1_3_2_49_2","doi-asserted-by":"crossref","unstructured":"J. Zeil N. Boeddeker W. St\u00fcrzl \"Visual homing in insects and robots\" in Flying Insects and Robots (Springer 2009) pp. 87\u2013100.","DOI":"10.1007\/978-3-540-89393-6_7"},{"key":"e_1_3_2_50_2","doi-asserted-by":"publisher","DOI":"10.1007\/s00422-006-0095-3"},{"key":"e_1_3_2_51_2","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0153706"},{"key":"e_1_3_2_52_2","doi-asserted-by":"crossref","unstructured":"S. Edvani E. Va S. Bettendorf SIMONA\u2013A reconfigurable and versatile research facility in Modeling and Simulation Technologies Conference (American Institute of Aeronautics and Astronautics 1997) p. 3809.","DOI":"10.2514\/6.1997-3809"},{"key":"e_1_3_2_53_2","doi-asserted-by":"crossref","unstructured":"S. Shah D. Dey C. Lovett A. Kapoor AirSim: High-fidelity visual and physical simulation for autonomous vehicles in Field and Service Robotics: Results of the 11th International Conference M. Hutter R. Siegwart Eds. vol. 5 of Springer Proceedings in Advanced Robotics (Springer 2018) pp. 621\u2013635.","DOI":"10.1007\/978-3-319-67361-5_40"},{"key":"e_1_3_2_54_2","doi-asserted-by":"publisher","DOI":"10.1002\/rob.20342"},{"key":"e_1_3_2_55_2","doi-asserted-by":"crossref","unstructured":"M. A. Khan F. Labrosse Visual topological mapping using an appearance-based location selection method in Towards Autonomous Robotic Systems: 21st Annual Conference TAROS 2020 Nottingham UK September 16 2020 Proceedings A. Mohammad X. Dong M. Russo Eds. vol. 12228 of Lecture Notes in Computer Science (Springer 2020) pp. 
90\u2013102.","DOI":"10.1007\/978-3-030-63486-5_12"},{"key":"e_1_3_2_56_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10071-020-01383-2"},{"key":"e_1_3_2_57_2","doi-asserted-by":"publisher","DOI":"10.7554\/eLife.54026"},{"key":"e_1_3_2_58_2","doi-asserted-by":"publisher","DOI":"10.7554\/eLife.73077"},{"key":"e_1_3_2_59_2","doi-asserted-by":"crossref","unstructured":"S. Zingg D. Scaramuzza S. Weiss R. Siegwart MAV navigation through indoor corridors using optical flow in 2010 IEEE International Conference on Robotics and Automation (IEEE 2010) pp. 3361\u20133368.","DOI":"10.1109\/ROBOT.2010.5509777"},{"key":"e_1_3_2_60_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.asd.2017.06.003"},{"key":"e_1_3_2_61_2","doi-asserted-by":"publisher","DOI":"10.1126\/science.1231806"},{"key":"e_1_3_2_62_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10514-007-9043-x"},{"key":"e_1_3_2_63_2","doi-asserted-by":"publisher","DOI":"10.1016\/S0146-664X(72)80017-0"},{"key":"e_1_3_2_64_2","first-page":"112","article-title":"Algorithms for the reduction of the number of points required to represent a digitized line or its caricature","volume":"10","author":"Douglas D. H.","year":"1973","unstructured":"D. H. Douglas, T. K. Peucker, Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartogr. Int. J. Geogr. Inf. Geovisualization 10, 112\u2013122 (1973).","journal-title":"Cartogr. Int. J. Geogr. Inf. 
Geovisualization"}],"container-title":["Science Robotics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.science.org\/doi\/pdf\/10.1126\/scirobotics.adk0310","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,11,24]],"date-time":"2024-11-24T08:47:17Z","timestamp":1732438037000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.science.org\/doi\/10.1126\/scirobotics.adk0310"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,7,17]]},"references-count":63,"journal-issue":{"issue":"92","published-print":{"date-parts":[[2024,7,17]]}},"alternative-id":["10.1126\/scirobotics.adk0310"],"URL":"https:\/\/doi.org\/10.1126\/scirobotics.adk0310","relation":{},"ISSN":["2470-9476"],"issn-type":[{"value":"2470-9476","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,7,17]]},"assertion":[{"value":"2023-08-04","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-06-14","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-07-17","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}],"article-number":"eadk0310"}}