{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,16]],"date-time":"2026-04-16T22:56:51Z","timestamp":1776380211294,"version":"3.51.2"},"reference-count":94,"publisher":"American Association for the Advancement of Science (AAAS)","issue":"97","content-domain":{"domain":["www.science.org"],"crossmark-restriction":true},"short-container-title":["Sci. Robot."],"published-print":{"date-parts":[[2024,12,4]]},"abstract":"<jats:p>The ability of a robot to plan complex behaviors with real-time computation, rather than adhering to predesigned or offline-learned routines, alleviates the need for specialized algorithms or training for each problem instance. Monte Carlo tree search is a powerful planning algorithm that strategically explores simulated future possibilities, but it requires a discrete problem representation that is irreconcilable with the continuous dynamics of the physical world. We present Spectral Expansion Tree Search (SETS), a real-time, tree-based planner that uses the spectrum of the locally linearized system to construct a low-complexity and approximately equivalent discrete representation of the continuous world. We prove that SETS converges to a bound of the globally optimal solution for continuous, deterministic, and differentiable Markov decision processes, a broad class of problems that includes underactuated nonlinear dynamics, nonconvex reward functions, and unstructured environments. We experimentally validated SETS on drone, spacecraft, and ground vehicle robots and one numerical experiment, each of which is not directly solvable with existing methods. We successfully show that SETS automatically discovers a diverse set of optimal behaviors and motion trajectories in real time.<\/jats:p>","DOI":"10.1126\/scirobotics.ado1010","type":"journal-article","created":{"date-parts":[[2024,12,4]],"date-time":"2024-12-04T18:58:13Z","timestamp":1733338693000},"update-policy":"https:\/\/doi.org\/10.34133\/aaas_crossmark","source":"Crossref","is-referenced-by-count":8,"title":["Monte Carlo tree search with spectral expansion for planning with dynamical systems"],"prefix":"10.1126","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4189-4090","authenticated-orcid":true,"given":"Benjamin","family":"Rivi\u00e8re","sequence":"first","affiliation":[{"name":"Division of Engineering and Applied Science, California Institute of Technology, Pasadena, CA 91125, USA."}]},{"ORCID":"https:\/\/orcid.org\/0009-0005-9621-201X","authenticated-orcid":true,"given":"John","family":"Lathrop","sequence":"additional","affiliation":[{"name":"Division of Engineering and Applied Science, California Institute of Technology, Pasadena, CA 91125, USA."}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6657-3907","authenticated-orcid":true,"given":"Soon-Jo","family":"Chung","sequence":"additional","affiliation":[{"name":"Division of Engineering and Applied Science, California Institute of Technology, Pasadena, CA 91125, USA."}]}],"member":"221","reference":[{"key":"e_1_3_2_2_2","unstructured":"R. Bellman Dynamic Programming (Princeton Univ. Press 1957)."},{"key":"e_1_3_2_3_2","unstructured":"S. LaValle \u201cRapidly-exploring random trees: A new tool for path planning \u201d Research Report 9811 (1998)."},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/70.508439"},{"key":"e_1_3_2_5_2","doi-asserted-by":"publisher","DOI":"10.1146\/annurev-control-061623-094742"},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.2514\/1.G000218"},{"key":"e_1_3_2_7_2","unstructured":"D. Malyuta T. P. Reynolds M. Szmuk T. Lew R. Bonalli M. Pavone B. Acikmese Convex optimization for trajectory generation. arXiv:2106.09125 [math.OC] (2021)."},{"key":"e_1_3_2_8_2","unstructured":"R. S. Sutton A. G. Barto Reinforcement Learning: An Introduction (MIT Press 2018)."},{"key":"e_1_3_2_9_2","doi-asserted-by":"publisher","DOI":"10.2514\/1.G006278"},{"key":"e_1_3_2_10_2","doi-asserted-by":"crossref","unstructured":"L. P. Kaelbling T. Lozano-Perez Hierarchical task and motion planning in the now in IEEE International Conference on Robotics and Automation (IEEE 2011) pp. 1470\u20131477.","DOI":"10.1109\/ICRA.2011.5980391"},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.1146\/annurev-control-091420-084139"},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIV.2016.2578706"},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.1146\/annurev-control-060117-105157"},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1126\/scirobotics.adg1462"},{"key":"e_1_3_2_15_2","doi-asserted-by":"crossref","unstructured":"P. Abbeel A. Coates M. Quigley A. Ng \u201cAn application of reinforcement learning to aerobatic helicopter flight\u201d in Advances in Neural Information Processing Systems 19 B. Scholkopf J. Platt T. Hoffman Eds. (MIT Press 2007) vol. 19.","DOI":"10.7551\/mitpress\/7503.003.0006"},{"key":"e_1_3_2_16_2","doi-asserted-by":"crossref","unstructured":"I. Lenz H. Lee A. Saxena Deep learning for detecting robotic grasps in Proceedings of Robotics: Science and Systems (RSS Foundation 2013).","DOI":"10.15607\/RSS.2013.IX.012"},{"key":"e_1_3_2_17_2","doi-asserted-by":"crossref","unstructured":"G. A. Castillo B. Weng W. Zhang A. Hereid Robust feedback motion policy design using reinforcement learning on a 3D digit bipedal robot in IEEE\/RSJ International Conference on Intelligent Robots and Systems (IEEE 2021) pp. 5136\u20135143.","DOI":"10.1109\/IROS51168.2021.9636467"},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","DOI":"10.1023\/A:1017932429737"},{"key":"e_1_3_2_19_2","doi-asserted-by":"crossref","unstructured":"L. Kocsis C. Szepesvari Bandit based Monte-Carlo planning in Machine Learning: ECML 2006 17th European Conference on Machine Learning Berlin Germany September 18-22 2006 Proceedings J. F\u00fcrnkranz T. Scheffer M. Spiliopoulou Eds. vol. 4212 of Lecture Notes in Computer Science (Springer 2006) pp. 282\u2013293.","DOI":"10.1007\/11871842_29"},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.1561\/2200000038"},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCIAIG.2012.2186810"},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.1287\/opre.4.1.61"},{"key":"e_1_3_2_23_2","doi-asserted-by":"crossref","unstructured":"K. J. Astrom R. M. Murray Feedback Systems: An Introduction for Scientists and Engineers (Princeton Univ. Press 2008).","DOI":"10.1515\/9781400828739"},{"key":"e_1_3_2_24_2","unstructured":"R. M. Murray Robotic Control and Nonholonomic Motion Planning (University of California 1991)."},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1109\/TRO.2024.3475212"},{"key":"e_1_3_2_26_2","doi-asserted-by":"crossref","unstructured":"NASA \u201cAsteroid redirect mission reference concept\u201d (2015); https:\/\/nasa.gov\/wp-content\/uploads\/2015\/04\/asteroid_redirect_mission_reference_concept_description_tagged.pdf.","DOI":"10.1063\/pt.5.028742"},{"key":"e_1_3_2_27_2","doi-asserted-by":"crossref","unstructured":"R. W. Beard T. W. McLain Small Unmanned Aircraft: Theory and Practice (Princeton Univ. Press 2012).","DOI":"10.1515\/9781400840601"},{"key":"e_1_3_2_28_2","doi-asserted-by":"crossref","unstructured":"A. Couetoux J.-B. Hoock N. Sokolovska O. Teytaud N. Bonnard Continuous upper confidence trees Learning and Intelligent Optimization: 5th International Conference LION 5 Rome Italy January 17-21 2011. Selected Papers C. A. Coelho Coelho Ed. (Springer 2011) pp. 433\u2013445.","DOI":"10.1007\/978-3-642-25566-3_32"},{"key":"e_1_3_2_29_2","unstructured":"T. Howell N. Gileadi S. Tunyasuvunakool K. Zakka T. Erez Y. Tassa Predictive sampling: Real-time behaviour synthesis with MuJoCo. arXiv:2212.00541 [cs.RO] (2022)."},{"key":"e_1_3_2_30_2","doi-asserted-by":"crossref","unstructured":"M. ApS MOSEK Fusion for C++ 10.1.21 (2019).","DOI":"10.1007\/978-3-030-12354-3_3"},{"key":"e_1_3_2_31_2","doi-asserted-by":"publisher","DOI":"10.1126\/scirobotics.abh1221"},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.actaastro.2020.01.006"},{"key":"e_1_3_2_33_2","doi-asserted-by":"crossref","unstructured":"A. Hereid E. A. Cousineau C. M. Hubicki A. D. Ames 3D dynamic walking with underactuated humanoid robots: A direct collocation framework for optimizing hybrid zero dynamics in 2016 IEEE International Conference on Robotics and Automation (ICRA) (IEEE 2016) pp. 1447\u20131454.","DOI":"10.1109\/ICRA.2016.7487279"},{"key":"e_1_3_2_34_2","doi-asserted-by":"publisher","DOI":"10.1177\/0278364916632065"},{"key":"e_1_3_2_35_2","doi-asserted-by":"publisher","DOI":"10.1177\/0278364910383444"},{"key":"e_1_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.1080\/00207176608921369"},{"key":"e_1_3_2_37_2","doi-asserted-by":"crossref","unstructured":"H. Bock M. Diehl D. Leineweber J. Schloder \u201cA direct multiple shooting method for real-time optimization of nonlinear DAE processes\u201d in Nonlinear Model Predictive Control F. Allg\u00f6wer A. Zheng Eds. (Springer 2000) pp. 245\u2013267.","DOI":"10.1007\/978-3-0348-8407-5_14"},{"key":"e_1_3_2_38_2","unstructured":"W. Li E. Todorov Iterative linear quadratic regulator design for nonlinear biological movement systems in First International Conference on Informatics in Control Automation and Robotics (SciTePress 2004) vol. 1 pp. 222\u2013229."},{"key":"e_1_3_2_39_2","doi-asserted-by":"crossref","unstructured":"Q. T. Dinh M. Diehl \u201cLocal convergence of sequential convex programming for nonconvex optimization\u201d in Recent Advances in Optimization and Its Applications in Engineering M. Diehl F. Glineur E. Jarlebring W. Michiels Eds. (Springer 2010) pp. 93\u2013102.","DOI":"10.1007\/978-3-642-12598-0_9"},{"key":"e_1_3_2_40_2","doi-asserted-by":"crossref","unstructured":"R. Bonalli A. Cauligi A. Bylard M. Pavone Gusto: Guaranteed sequential trajectory optimization via sequential convex programming in 2019 International Conference on Robotics and Automation (ICRA) (IEEE 2019) pp. 6741\u20136747.","DOI":"10.1109\/ICRA.2019.8794205"},{"key":"e_1_3_2_41_2","doi-asserted-by":"publisher","DOI":"10.1109\/MRA.2012.2205651"},{"key":"e_1_3_2_42_2","doi-asserted-by":"publisher","DOI":"10.1126\/scirobotics.adf7843"},{"key":"e_1_3_2_43_2","doi-asserted-by":"publisher","DOI":"10.1177\/0278364911406761"},{"key":"e_1_3_2_44_2","doi-asserted-by":"publisher","DOI":"10.1177\/0278364915614386"},{"key":"e_1_3_2_45_2","unstructured":"E. Poccia \u201cDeterministic sampling-based algorithms for motion planning under differential constraints \u201d thesis Pisa Univ. Pisa Italy (2017)."},{"key":"e_1_3_2_46_2","doi-asserted-by":"crossref","unstructured":"W. Honig J. O. de Haro M. Toussaint db-a*: Discontinuity-bounded search for kinodynamic mobile robot motion planning in IEEE\/RSJ International Conference on Intelligent Robots and Systems (IEEE 2022) pp. 13540\u201313547.","DOI":"10.1109\/IROS47612.2022.9981577"},{"key":"e_1_3_2_47_2","doi-asserted-by":"publisher","DOI":"10.2514\/2.4856"},{"key":"e_1_3_2_48_2","doi-asserted-by":"publisher","DOI":"10.1109\/TRO.2005.852260"},{"key":"e_1_3_2_49_2","doi-asserted-by":"publisher","DOI":"10.1177\/02783649231201196"},{"key":"e_1_3_2_50_2","doi-asserted-by":"crossref","unstructured":"E. Schmerling L. Janson M. Pavone Optimal sampling-based motion planning under differential constraints: The drift case with linear affine dynamics in 2015 54th IEEE Conference on Decision and Control (CDC) (IEEE 2015) pp. 2574\u20132581.","DOI":"10.1109\/CDC.2015.7402604"},{"key":"e_1_3_2_51_2","doi-asserted-by":"crossref","unstructured":"R. Tedrake \u201cLQR-Trees: Feedback motion planning on sparse randomized trees\u201d in Robotics: Science and Systems (RSS Foundation 2009).","DOI":"10.15607\/RSS.2009.V.003"},{"key":"e_1_3_2_52_2","doi-asserted-by":"publisher","DOI":"10.1177\/0278364917712421"},{"key":"e_1_3_2_53_2","doi-asserted-by":"crossref","unstructured":"S. Karaman E. Frazzoli Optimal kinodynamic motion planning using incremental sampling-based methods in 49th IEEE Conference on Decision and Control (CDC) (IEEE 2010) pp. 7681\u20137687.","DOI":"10.1109\/CDC.2010.5717430"},{"key":"e_1_3_2_54_2","doi-asserted-by":"crossref","unstructured":"T. M. Moerland J. Broekens A. Plaat C. M. Jonker Model-based Reinforcement Learning: A Survey (Foundations and Trends in Machine Learning 2023) vol. 1 16.","DOI":"10.1561\/2200000086"},{"key":"e_1_3_2_55_2","unstructured":"D. Auger A. Couetoux O. Teytaud Continuous upper confidence trees with polynomial exploration\u2013consistency in Machine Learning and Knowledge Discovery in Databases: European Conference ECML PKDD 2013 Prague Czech Republic September 23-27 2013 Proceedings Part I H. Blockeel K. Kersting S. Nijssen F. \u017delezn\u00fd Eds. (Springer 2013) pp. 194\u2013209."},{"key":"e_1_3_2_56_2","unstructured":"D. Silver J. Veness Monte-Carlo planning in large POMDPS in Advances in Neural Information Processing Systems 23 J. Lafferty C. Williams J. Shawe-Taylor R. Zemel A. Culotta Eds. (Curran Associates 2010) vol. 23."},{"key":"e_1_3_2_57_2","doi-asserted-by":"crossref","unstructured":"Z. Sunberg M. Kochenderfer Online algorithms for POMDPS with continuous state action and observation spaces in Proceedings of the International Conference on Automated Planning and Scheduling (AAAI 2018) vol. 28 pp. 259\u2013263.","DOI":"10.1609\/icaps.v28i1.13882"},{"key":"e_1_3_2_58_2","doi-asserted-by":"crossref","unstructured":"J. Ragan B. Riviere S.-J. Chung Bayesian active sensing for fault estimation with belief space tree search paper presented at AIAA SCITECH 2023 Forum National Harbor MD 23 to 27 January 2023.","DOI":"10.2514\/6.2023-0874"},{"key":"e_1_3_2_59_2","unstructured":"V. Lisy V. Kovarik M. Lanctot B. Bosansky Convergence of Monte Carlo tree search in simultaneous move games in Advances in Neural Information Processing Systems 26 C. J. Burges L. Bottou M. Welling Z. Ghahramani K. Q. Weinberger Eds. (Curran Associates 2013) vol. 26."},{"key":"e_1_3_2_60_2","doi-asserted-by":"publisher","DOI":"10.1177\/0278364912444543"},{"key":"e_1_3_2_61_2","doi-asserted-by":"crossref","unstructured":"G. Williams P. Drews B. Goldfain J. M. Rehg E. A. Theodorou Aggressive driving with model predictive path integral control in 2016 IEEE International Conference on Robotics and Automation (ICRA) (IEEE 2016) pp. 1433\u20131440.","DOI":"10.1109\/ICRA.2016.7487277"},{"key":"e_1_3_2_62_2","unstructured":"M. Bhardwaj B. Sundaralingam A. Mousavian N. D. Ratliff D. Fox F. Ramos B. Boots Storm: An integrated framework for fast joint-space model-predictive control for reactive manipulation in Conference on Robot Learning (MLResearchPress 2022) pp. 750\u2013759."},{"key":"e_1_3_2_63_2","unstructured":"C. Pezzato C. Salmi M. Spahn E. Trevisan J. Alonso-Mora C. H. Corbato Sampling-based model predictive control leveraging parallelizable physics simulations. arXiv:2307.09105 [cs.RO] (2023)."},{"key":"e_1_3_2_64_2","unstructured":"N. P. Garg D. Hsu W. S. Lee \u201cDespot-alpha: Online pomdp planning with large state and observation spaces\u201d in Robotics: Science and Systems (RSS Foundation 2019) vol. 3 pp. 3\u20132."},{"key":"e_1_3_2_65_2","doi-asserted-by":"publisher","DOI":"10.1109\/TRO.2022.3197072"},{"key":"e_1_3_2_66_2","doi-asserted-by":"crossref","unstructured":"R. S. Sutton Planning by incremental dynamic programming in Machine Learning Proceedings (Elsevier 1991) pp. 353\u2013357.","DOI":"10.1016\/B978-1-55860-200-7.50073-8"},{"key":"e_1_3_2_67_2","doi-asserted-by":"publisher","DOI":"10.1109\/9.133184"},{"key":"e_1_3_2_68_2","doi-asserted-by":"publisher","DOI":"10.1023\/A:1017992615625"},{"key":"e_1_3_2_69_2","doi-asserted-by":"publisher","DOI":"10.1177\/0278364917753994"},{"key":"e_1_3_2_70_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10846-017-0468-y"},{"key":"e_1_3_2_71_2","doi-asserted-by":"publisher","DOI":"10.1109\/MSP.2017.2743240"},{"key":"e_1_3_2_72_2","unstructured":"J. Schulman F. Wolski P. Dhariwal A. Radford O. Klimov Proximal policy optimization algorithms. arXiv:1707.06347 [cs.LG] (2017)."},{"key":"e_1_3_2_73_2","unstructured":"M. Fazel R. Ge S. Kakade M. Mesbahi Global convergence of policy gradient methods for the linear quadratic regulator in International Conference on Machine Learning (MLResearchPress 2018) pp. 1467\u20131476."},{"key":"e_1_3_2_74_2","doi-asserted-by":"publisher","DOI":"10.1137\/19M1288012"},{"key":"e_1_3_2_75_2","doi-asserted-by":"publisher","DOI":"10.1287\/opre.2021.0014"},{"key":"e_1_3_2_76_2","doi-asserted-by":"crossref","unstructured":"E. Schmerling K. Leung W. Vollprecht M. Pavone Multimodal probabilistic model-based planning for human-robot interaction in 2018 IEEE International Conference on Robotics and Automation (ICRA) (IEEE 2018) pp. 3399\u20133406.","DOI":"10.1109\/ICRA.2018.8460766"},{"key":"e_1_3_2_77_2","doi-asserted-by":"crossref","unstructured":"B. Kim K. Lee S. Lim L. Kaelbling T. Lozano-Perez Monte Carlo tree search in continuous spaces using Voronoi optimistic optimization with regret bounds in Proceedings of the AAAI Conference on Artificial Intelligence (AAAI 2020) vol. 34 pp. 9916\u20139924.","DOI":"10.1609\/aaai.v34i06.6546"},{"key":"e_1_3_2_78_2","doi-asserted-by":"publisher","DOI":"10.2514\/1.G001921"},{"key":"e_1_3_2_79_2","doi-asserted-by":"publisher","DOI":"10.1016\/S0004-3702(99)00052-1"},{"key":"e_1_3_2_80_2","doi-asserted-by":"crossref","unstructured":"Y. Lee P. Cai D. Hsu Magic: Learning macro-actions for online POMDP planning in Proceedings of Robotics: Science and Systems (RSS Foundation 2021).","DOI":"10.15607\/RSS.2021.XVII.041"},{"key":"e_1_3_2_81_2","unstructured":"A. Bai S. Srivastava S. Russell Markovian state and action abstractions for MDPS via hierarchical MCTS in Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence S. Kambhampati Ed. (IJCAI\/AAAI Press 2016) pp. 3029\u20133039."},{"key":"e_1_3_2_82_2","doi-asserted-by":"crossref","unstructured":"M. De Waard D. M. Roijers S. C. Bakkes Monte Carlo tree search with options for general video game playing in 2016 IEEE Conference on Computational Intelligence and Games (CIG) (IEEE 2016) pp. 1\u20138.","DOI":"10.1109\/CIG.2016.7860383"},{"key":"e_1_3_2_83_2","doi-asserted-by":"crossref","unstructured":"A. Jamgochian H. Buurmeijer K. H. Wray A. Corso M. J. Kochenderfer Constrained hierarchical Monte Carlo belief-state planning. arXiv:2310.20054 [cs.AI] (2023).","DOI":"10.1109\/ICRA57147.2024.10611223"},{"key":"e_1_3_2_84_2","unstructured":"R. S. Sutton D. McAllester S. Singh Y. Mansour Policy gradient methods for reinforcement learning with function approximation in Advances in Neural Information Processing Systems 12 S. Solla T. Leen K. M\u00fcller Eds. (MIT Press 1999)."},{"key":"e_1_3_2_85_2","unstructured":"M. Deisenroth C. E. Rasmussen Pilco: A model-based and data-efficient approach to policy search in Proceedings of the 28th International Conference on Machine Learning (ICML11) (Association for Computing Machinery 2011) pp. 465\u2013472."},{"key":"e_1_3_2_86_2","unstructured":"S. Levine V. Koltun Guided policy search in International Conference on Machine Learning (MLResearchPress 2013) pp. 1\u20139."},{"key":"e_1_3_2_87_2","doi-asserted-by":"publisher","DOI":"10.1109\/LRA.2020.2994035"},{"key":"e_1_3_2_88_2","unstructured":"M. L. Puterman Markov Decision Processes: Discrete Stochastic Dynamic Programming (John Wiley & Sons 2014)."},{"key":"e_1_3_2_89_2","doi-asserted-by":"crossref","unstructured":"D. Shah Q. Xie Z. Xu Non-asymptotic analysis of Monte Carlo tree search in Abstracts of the 2020 SIGMETRICS\/Performance Joint International Conference on Measurement and Modeling of Computer Systems (Association for Computing Machinery 2020) pp. 31\u201332.","DOI":"10.1145\/3393691.3394202"},{"key":"e_1_3_2_90_2","doi-asserted-by":"publisher","DOI":"10.1038\/nature24270"},{"key":"e_1_3_2_91_2","doi-asserted-by":"crossref","unstructured":"S. Boyd L. El Ghaoui E. Feron V. Balakrishnan Linear Matrix Inequalities in System and Control Theory (SIAM 1994).","DOI":"10.1137\/1.9781611970777"},{"key":"e_1_3_2_92_2","unstructured":"K. Zhou J. C. Doyle Essentials of Robust Control (Prentice Hall 1998) vol. 104."},{"key":"e_1_3_2_93_2","unstructured":"R. Tedrake Drake Development Team Drake: Model-based design and verification for robotics (2019); https:\/\/drake.mit.edu."},{"key":"e_1_3_2_94_2","doi-asserted-by":"publisher","DOI":"10.1109\/LRA.2021.3096758"},{"key":"e_1_3_2_95_2","doi-asserted-by":"publisher","DOI":"10.1126\/scirobotics.adn4722"}],"container-title":["Science Robotics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.science.org\/doi\/pdf\/10.1126\/scirobotics.ado1010","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,12,4]],"date-time":"2024-12-04T18:58:30Z","timestamp":1733338710000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.science.org\/doi\/10.1126\/scirobotics.ado1010"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,12,4]]},"references-count":94,"journal-issue":{"issue":"97","published-print":{"date-parts":[[2024,12,4]]}},"alternative-id":["10.1126\/scirobotics.ado1010"],"URL":"https:\/\/doi.org\/10.1126\/scirobotics.ado1010","relation":{},"ISSN":["2470-9476"],"issn-type":[{"value":"2470-9476","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,12,4]]},"assertion":[{"value":"2024-01-18","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-11-06","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-12-04","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}],"article-number":"eado1010"}}