{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,26]],"date-time":"2026-02-26T15:29:45Z","timestamp":1772119785526,"version":"3.50.1"},"reference-count":66,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2025,9,1]],"date-time":"2025-09-01T00:00:00Z","timestamp":1756684800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,9,15]],"date-time":"2025-09-15T00:00:00Z","timestamp":1757894400000},"content-version":"vor","delay-in-days":14,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100000147","name":"Division of Civil, Mechanical and Manufacturing Innovation","doi-asserted-by":"publisher","award":["2205241"],"award-info":[{"award-number":["2205241"]}],"id":[{"id":"10.13039\/100000147","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Auton Robot"],"published-print":{"date-parts":[[2025,9]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Robots should learn new tasks from humans. But how do humans convey what they want the robot to do? Existing methods largely rely on humans physically guiding the robot arm throughout their intended task. Unfortunately \u2014 as we scale up the amount of data \u2014 physical guidance becomes prohibitively burdensome. Not only do humans need to operate robot hardware but also modify the environment (e.g., moving and resetting objects) to provide multiple task examples. In this work we propose L2D2, a sketching interface and imitation learning algorithm where humans can provide demonstrations by <jats:italic>drawing<\/jats:italic> the task. L2D2 starts with a single image of the robot arm and its workspace. Using a tablet, users draw and label trajectories on this image to illustrate how the robot should act. To collect new and diverse demonstrations, we no longer need the human to physically reset the workspace; instead, L2D2 leverages vision-language segmentation to autonomously vary object locations and generate synthetic images for the human to draw upon. We recognize that drawing trajectories is not as information-rich as physically demonstrating the task. Drawings are 2-dimensional and do not capture how the robot\u2019s actions affect its environment. To address these fundamental challenges the next stage of L2D2 grounds the human\u2019s static, 2D drawings in our dynamic, 3D world by leveraging a small set of physical demonstrations. Our experiments and user study suggest that L2D2 enables humans to provide more demonstrations with less time and effort than traditional approaches, and users prefer drawings over physical manipulation. When compared to other drawing-based approaches, we find that L2D2 learns more performant robot policies, requires a smaller dataset, and can generalize to longer-horizon tasks. 
See our project website: <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"https:\/\/collab.me.vt.edu\/L2D2\/\" ext-link-type=\"uri\">https:\/\/collab.me.vt.edu\/L2D2\/<\/jats:ext-link>\n          <\/jats:p>","DOI":"10.1007\/s10514-025-10210-x","type":"journal-article","created":{"date-parts":[[2025,9,15]],"date-time":"2025-09-15T14:31:04Z","timestamp":1757946664000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["L2D2: Robot Learning from 2D drawings"],"prefix":"10.1007","volume":"49","author":[{"given":"Shaunak A.","family":"Mehta","sequence":"first","affiliation":[]},{"given":"Heramb","family":"Nemlekar","sequence":"additional","affiliation":[]},{"given":"Hari","family":"Sumant","sequence":"additional","affiliation":[]},{"given":"Dylan P.","family":"Losey","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,9,15]]},"reference":[{"key":"10210_CR1","doi-asserted-by":"crossref","unstructured":"Alakuijala, M., Dulac-Arnold, G., Mairal, J., Ponce, J., & Schmid, C. (2023). Learning reward functions for robotic manipulation by observing humans. In: IEEE International Conference on Robotics and Automation, pp. 5006\u20135012.","DOI":"10.1109\/ICRA48891.2023.10161178"},{"issue":"5","key":"10210_CR2","doi-asserted-by":"publisher","first-page":"469","DOI":"10.1016\/j.robot.2008.10.024","volume":"57","author":"BD Argall","year":"2009","unstructured":"Argall, B. D., Chernova, S., Veloso, M., & Browning, B. (2009). A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5), 469\u2013483.","journal-title":"Robotics and Autonomous Systems"},{"key":"10210_CR3","doi-asserted-by":"crossref","unstructured":"Bahadur, N., Lewandowski, B., & Paffenroth, R. (2022). Dimension estimation using autoencoders and application. Deep Learning Applications, pp. 95\u2013121.","DOI":"10.1007\/978-981-16-3357-7_4"},{"key":"10210_CR4","doi-asserted-by":"crossref","unstructured":"Bahl, S., Gupta, A., & Pathak, D. (2022). Human-to-robot imitation in the wild. In: Robotics: Science and Systems.","DOI":"10.15607\/RSS.2022.XVIII.026"},{"key":"10210_CR5","unstructured":"Belkhale, S., Cui, Y., & Sadigh, D. (2023). Data quality in imitation learning. Advances in Neural Information Processing Systems, pp. 80375\u201380395."},{"key":"10210_CR6","unstructured":"Bharadhwaj, H., Dwibedi, D., Gupta, A., Tulsiani, S., Doersch, C., Xiao, T., Shah, D., Xia, F., Sadigh, D., & Kirmani, S. (2024). Gen2Act: Human video generation in novel scenarios enables generalizable robot manipulation. In: 1st Workshop on X-Embodiment Robot Learning."},{"key":"10210_CR7","doi-asserted-by":"crossref","unstructured":"Bharadhwaj, H., Mottaghi, R., Gupta, A., & Tulsiani, S. (2024). Track2Act: Predicting point tracks from internet videos enables generalizable robot manipulation. In: European Conference on Computer Vision, pp. 306\u2013324.","DOI":"10.1007\/978-3-031-73116-7_18"},{"key":"10210_CR8","doi-asserted-by":"crossref","unstructured":"Brohan, A., Brown, N., Carbajal, J., Chebotar, Y., Dabis, J., Finn, C., Gopalakrishnan, K., Hausman, K., Herzog, A., Hsu, J., & Ibarz, J. (2022). RT-1: Robotics transformer for real-world control at scale. arXiv preprint arXiv:2212.06817.","DOI":"10.15607\/RSS.2023.XIX.025"},{"key":"10210_CR9","doi-asserted-by":"crossref","unstructured":"Cacciarelli, D., & Kulahci, M. (2023). Hidden dimensions of the data: PCA vs autoencoders. Quality Engineering, pp. 741\u2013750.","DOI":"10.1080\/08982112.2023.2231064"}
741\u2013750.","DOI":"10.1080\/08982112.2023.2231064"},{"key":"10210_CR10","unstructured":"Dasari, S., Gupta, A. (2021).Transformers for one-shot visual imitation. In: Conference on Robot Learning, pp. 2071\u20132084."},{"key":"10210_CR11","unstructured":"Duan, J., Wang, Y.R., Shridhar, M., Fox, D., & Krishna, R.: Ar2-d2: Training a robot without a robot. In: 7th Annual Conference on Robot Learning."},{"key":"10210_CR12","unstructured":"Duan, Y., Andrychowicz, M., Stadie, B., Jonathan\u00a0Ho, O., Schneider, J., Sutskever, I., Abbeel, P., & Zaremba, W. (2017). One-shot imitation learning. Advances in Neural Information Processing Systems (NeurIPS) 30."},{"key":"10210_CR13","doi-asserted-by":"crossref","unstructured":"Ebert, F., Yang, Y., Schmeckpeper, K., Bucher, B., Georgakis, G., Daniilidis, K., Finn, C., & Levine, S. (2022). Bridge data: Boosting generalization of robotic skills with cross-domain datasets. In: Robotics: Science and Systems (RSS).","DOI":"10.15607\/RSS.2022.XVIII.063"},{"key":"10210_CR14","doi-asserted-by":"crossref","unstructured":"Fang, H., Fang, H.S., Wang, Y., Ren, J., Chen, J., Zhang, R., Wang, W., & Lu, C.(2024). AirExo: Low-cost exoskeletons for learning whole-arm manipulation in the wild. In: IEEE International Conference on Robotics and Automation, pp. 15031\u201315038 (2024).","DOI":"10.1109\/ICRA57147.2024.10610799"},{"key":"10210_CR15","doi-asserted-by":"crossref","unstructured":"Fournier, Q., & Aloise, D. (2019). Empirical comparison between autoencoders and traditional dimensionality reduction methods. In: IEEE International Conference on Artificial Intelligence and Knowledge Engineering, pp. 211\u2013214.","DOI":"10.1109\/AIKE.2019.00044"},{"key":"10210_CR16","unstructured":"Fu, Z., Zhao, T.Z., & Finn, C. (2024). Mobile ALOHA: Learning bimanual mobile manipulation using low-cost whole-body teleoperation. In: Conference on Robot Learning."},{"key":"10210_CR17","doi-asserted-by":"crossref","unstructured":"Gao, T., Nasiriany, S., Liu, H., Yang, Q., & Zhu, Y. (2024). Prime: Scaffolding manipulation tasks with behavior primitives for data-efficient imitation learning. IEEE Robotics and Automation Letters.","DOI":"10.1109\/LRA.2024.3443610"},{"key":"10210_CR18","doi-asserted-by":"publisher","DOI":"10.1016\/j.softx.2025.102054","volume":"29","author":"A George","year":"2025","unstructured":"George, A., Bartsch, A., & Farimani, A. B. (2025). Openvr: Teleoperation for manipulation. SoftwareX, 29, Article 102054.","journal-title":"SoftwareX"},{"key":"10210_CR19","unstructured":"Gu, J., Kirmani, S., Wohlhart, P., Lu, Y., Arenas, M.G., Rao, K., Yu, W., Fu, C., Gopalakrishnan, K., Xu, Z., Sundaresan, P., Xu, P., Su, H., Hausman, K., Finn, C., Vuong, Q., & Xiao, T.(2024).RT-trajectory: Robotic task generalization via hindsight trajectory sketches. In: International Conference on Learning Representations."},{"key":"10210_CR20","doi-asserted-by":"crossref","unstructured":"Hartley, R., Zisserman, A. (2003).Multiple view geometry in computer vision. Cambridge University Press .","DOI":"10.1017\/CBO9780511811685"},{"key":"10210_CR21","doi-asserted-by":"crossref","unstructured":"Hoque, R., Mandlekar, A., Garrett, C.R., Goldberg, K., & Fox, D. (2023).Interventional data generation for robust and data-efficient robot imitation learning. 
,{"key":"10210_CR22","unstructured":"Iyer, A., Peng, Z., Dai, Y., Guzey, I., Haldar, S., Chintala, S., & Pinto, L. (2024). OPEN TEACH: A versatile teleoperation system for robotic manipulation. In: Conference on Robot Learning."},{"key":"10210_CR23","doi-asserted-by":"crossref","unstructured":"Jain, V., Attarian, M., Joshi, N.J., Wahid, A., Driess, D., Vuong, Q., Sanketi, P.R., Sermanet, P., Welker, S., Chan, C., & Gilitschenski, I. (2024). Vid2Robot: End-to-end video-conditioned policy learning with cross-attention transformers. In: Robotics: Science and Systems (RSS).","DOI":"10.15607\/RSS.2024.XX.052"},{"key":"10210_CR24","unstructured":"James, S., Bloesch, M., & Davison, A.J. (2018). Task-embedded control networks for few-shot imitation learning. In: Conference on Robot Learning, pp. 783\u2013795."},{"key":"10210_CR25","unstructured":"Jang, E., Irpan, A., Khansari, M., Kappler, D., Ebert, F., Lynch, C., Levine, S., & Finn, C. (2022). BC-Z: Zero-shot task generalization with robotic imitation learning. In: Conference on Robot Learning, pp. 991\u20131002."},{"key":"10210_CR26","doi-asserted-by":"crossref","unstructured":"Jolliffe, I.T., & Cadima, J. (2016). Principal component analysis: A review and recent developments. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences.","DOI":"10.1098\/rsta.2015.0202"},{"key":"10210_CR27","doi-asserted-by":"crossref","unstructured":"Jonnavittula, A., Mehta, S.A., & Losey, D.P. (2024). SARI: Shared autonomy across repeated interaction. ACM Transactions on Human-Robot Interaction.","DOI":"10.1145\/3651994"},{"issue":"1","key":"10210_CR28","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/s10514-024-10188-y","volume":"49","author":"A Jonnavittula","year":"2025","unstructured":"Jonnavittula, A., Parekh, S., & Losey, D. P. (2025). VIEW: Visual imitation learning with waypoints. Autonomous Robots, 49(1), 1\u201326.","journal-title":"Autonomous Robots"},{"issue":"3","key":"10210_CR29","doi-asserted-by":"publisher","first-page":"396","DOI":"10.1007\/s11633-022-1346-z","volume":"20","author":"LH Kong","year":"2023","unstructured":"Kong, L. H., He, W., Chen, W. S., Zhang, H., & Wang, Y. N. (2023). Dynamic movement primitives based robot skills learning. Machine Intelligence Research, 20(3), 396\u2013407.","journal-title":"Machine Intelligence Research"},{"key":"10210_CR30","unstructured":"Laskey, M., Lee, J., Fox, R., Dragan, A., & Goldberg, K. (2017). DART: Noise injection for robust imitation learning. In: Conference on Robot Learning."},{"key":"10210_CR31","unstructured":"Liu, B., Zhu, Y., Gao, C., Feng, Y., Liu, Q., Zhu, Y., & Stone, P. (2024). LIBERO: Benchmarking knowledge transfer for lifelong robot learning. Advances in Neural Information Processing Systems."},{"key":"10210_CR32","doi-asserted-by":"crossref","unstructured":"Lynch, C., & Sermanet, P. (2021). Language conditioned imitation learning over unstructured data. In: Robotics: Science and Systems.","DOI":"10.15607\/RSS.2021.XVII.047"},{"key":"10210_CR33","unstructured":"Mandi, Z., Bharadhwaj, H., Moens, V., Song, S., Rajeswaran, A., & Kumar, V. (2022). CACTI: A framework for scalable multi-task multi-scene visual imitation learning. In: CoRL 2022 Workshop on Pre-Training Robot Learning."}
,{"key":"10210_CR34","doi-asserted-by":"crossref","unstructured":"Mandlekar, A., Xu, D., Mart\u00edn-Mart\u00edn, R., Savarese, S., & Fei-Fei, L. (2020). Learning to generalize across long-horizon tasks from human demonstrations. arXiv preprint arXiv:2003.06085.","DOI":"10.15607\/RSS.2020.XVI.061"},{"key":"10210_CR35","doi-asserted-by":"crossref","unstructured":"Mehta, S.A., Ciftci, Y.U., Ramachandran, B., Bansal, S., & Losey, D.P. (2025). Stable-BC: Controlling covariate shift with stable behavior cloning. IEEE Robotics and Automation Letters.","DOI":"10.1109\/LRA.2025.3526439"},{"key":"10210_CR36","doi-asserted-by":"crossref","unstructured":"Mehta, S.A., & Losey, D.P. (2024). Unified learning from demonstrations, corrections, and preferences during physical human\u2013robot interaction. ACM Transactions on Human-Robot Interaction 13(3).","DOI":"10.1145\/3623384"},{"key":"10210_CR37","doi-asserted-by":"crossref","unstructured":"Mehta, S.A., Parekh, S., & Losey, D.P. (2022). Learning latent actions without human demonstrations. In: IEEE International Conference on Robotics and Automation, pp. 7437\u20137443.","DOI":"10.1109\/ICRA46639.2022.9812230"},{"key":"10210_CR38","unstructured":"Mirjalili, R., J\u00fclg, T., Walter, F., & Burgard, W. (2025). Augmented reality for robots (ARRO): Pointing visuomotor policies towards visual robustness. arXiv preprint arXiv:2505.08627."},{"key":"10210_CR39","doi-asserted-by":"crossref","unstructured":"Nguyen, K., Dey, D., Brockett, C., & Dolan, B. (2019). Vision-based navigation with language-based assistance via imitation learning with indirect intervention. In: IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 12527\u201312537.","DOI":"10.1109\/CVPR.2019.01281"},{"issue":"1\u20132","key":"10210_CR40","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1561\/2300000053","volume":"7","author":"T Osa","year":"2018","unstructured":"Osa, T., Pajarinen, J., Neumann, G., Bagnell, J. A., Abbeel, P., & Peters, J. (2018). An algorithmic perspective on imitation learning. Foundations and Trends in Robotics, 7(1\u20132), 1\u2013179.","journal-title":"Foundations and Trends in Robotics"},{"key":"10210_CR41","unstructured":"O\u2019Neill, A., Rehman, A., Maddukuri, A., Gupta, A., Padalkar, A., Lee, A., Pooley, A., Gupta, A., Mandlekar, A., Jain, A., & Tung, A. (2024). Open X-Embodiment: Robotic learning datasets and RT-X models. In: IEEE International Conference on Robotics and Automation, pp. 6892\u20136903."},{"key":"10210_CR42","doi-asserted-by":"crossref","unstructured":"Pari, J., Shafiullah, N.M.M., Arunachalam, S.P., & Pinto, L. (2022). The surprising effectiveness of representation learning for visual imitation. In: Robotics: Science and Systems.","DOI":"10.15607\/RSS.2022.XVIII.010"},{"key":"10210_CR43","unstructured":"Pomerleau, D.A. (1988). ALVINN: An autonomous land vehicle in a neural network. Advances in Neural Information Processing Systems 1."},{"issue":"1","key":"10210_CR44","doi-asserted-by":"publisher","first-page":"297","DOI":"10.1146\/annurev-control-100819-063206","volume":"3","author":"H Ravichandar","year":"2020","unstructured":"Ravichandar, H., Polydoros, A. S., Chernova, S., & Billard, A. (2020). Recent advances in robot learning from demonstration. Annual Review of Control, Robotics, and Autonomous Systems, 3(1), 297\u2013330.","journal-title":"Annual Review of Control, Robotics, and Autonomous Systems"}
,{"key":"10210_CR45","doi-asserted-by":"crossref","unstructured":"Rolinek, M., Zietlow, D., & Martius, G. (2019). Variational autoencoders pursue PCA directions (by accident). In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 12406\u201312415.","DOI":"10.1109\/CVPR.2019.01269"},{"key":"10210_CR46","unstructured":"Ross, S., Gordon, G., & Bagnell, D. (2011). A reduction of imitation learning and structured prediction to no-regret online learning. In: International Conference on Artificial Intelligence and Statistics."},{"key":"10210_CR47","unstructured":"Shafiullah, N.M.M., Rai, A., Etukuru, H., Liu, Y., Misra, I., Chintala, S., & Pinto, L. (2023). On bringing robots home. arXiv preprint arXiv:2311.16098."},{"issue":"3","key":"10210_CR48","doi-asserted-by":"publisher","first-page":"604","DOI":"10.1109\/JPROC.2011.2179772","volume":"100","author":"D Shah","year":"2012","unstructured":"Shah, D., Schneider, J., & Campbell, M. (2012). A sketch interface for robust and natural robot control. Proceedings of the IEEE, 100(3), 604\u2013622.","journal-title":"Proceedings of the IEEE"},{"issue":"4","key":"10210_CR49","doi-asserted-by":"publisher","first-page":"519","DOI":"10.7210\/jrsj.22.519","volume":"22","author":"NE Sian","year":"2004","unstructured":"Sian, N. E., Yokoi, K., Kajita, S., Kanehiro, F., & Tanie, K. (2004). Whole body teleoperation of a humanoid robot: Development of a simple master device using joysticks. Journal of the Robotics Society of Japan, 22(4), 519\u2013527.","journal-title":"Journal of the Robotics Society of Japan"},{"issue":"3","key":"10210_CR50","doi-asserted-by":"publisher","first-page":"4978","DOI":"10.1109\/LRA.2020.3004787","volume":"5","author":"S Song","year":"2020","unstructured":"Song, S., Zeng, A., Lee, J., & Funkhouser, T. (2020). Grasping in the wild: Learning 6DOF closed-loop grasping from low-cost demonstrations. IEEE Robotics and Automation Letters, 5(3), 4978\u20134985.","journal-title":"IEEE Robotics and Automation Letters"},{"key":"10210_CR51","unstructured":"Stepputtis, S., Campbell, J., Phielipp, M., Lee, S., Baral, C., & Ben\u00a0Amor, H. (2020). Language-conditioned imitation learning for robot manipulation tasks. Advances in Neural Information Processing Systems, pp. 13139\u201313150."},{"key":"10210_CR52","unstructured":"Sundaresan, P., Vuong, Q., Gu, J., Xu, P., Xiao, T., Kirmani, S., Yu, T., Stark, M., Jain, A., Hausman, K., Sadigh, D., Bohg, J., & Schaal, S. (2024). RT-Sketch: Goal-conditioned imitation learning from hand-drawn sketches. In: Conference on Robot Learning."},{"key":"10210_CR53","doi-asserted-by":"crossref","unstructured":"Tanada, K., Iwanaga, Y., Tsuchinaga, M., Nakamura, Y., Mori, T., Sakai, R., & Yamamoto, T. (2024). Sketch-MoMa: Teleoperation for mobile manipulator via interpretation of hand-drawn sketches. arXiv preprint arXiv:2412.19153.","DOI":"10.1109\/ICRA55743.2025.11128261"},{"key":"10210_CR54","unstructured":"Wang, C., Fan, L., Sun, J., Zhang, R., Fei-Fei, L., Xu, D., Zhu, Y., & Anandkumar, A. (2023). MimicPlay: Long-horizon imitation learning by watching human play. In: 7th Annual Conference on Robot Learning."},{"key":"10210_CR55","doi-asserted-by":"crossref","unstructured":"Wu, P., Shentu, Y., Yi, Z., Lin, X., & Abbeel, P. (2024). GELLO: A general, low-cost, and intuitive teleoperation framework for robot manipulators. In: IEEE\/RSJ International Conference on Intelligent Robots and Systems, pp. 12156\u201312163.","DOI":"10.1109\/IROS58592.2024.10801581"}
,{"key":"10210_CR56","unstructured":"Young, S., Gandhi, D., Tulsiani, S., Gupta, A., Abbeel, P., & Pinto, L. (2021). Visual imitation made easy. In: Conference on Robot Learning, pp. 1992\u20132005."},{"key":"10210_CR57","doi-asserted-by":"crossref","unstructured":"Yu, P., Bhaskar, A., Singh, A., Mahammad, Z., & Tokekar, P. (2025). Sketch-to-Skill: Bootstrapping robot learning with human drawn trajectory sketches. arXiv preprint arXiv:2503.11918.","DOI":"10.15607\/RSS.2025.XXI.151"},{"key":"10210_CR58","doi-asserted-by":"crossref","unstructured":"Yu, T., Xiao, T., Stone, A., Tompson, J., Brohan, A., Wang, S., Singh, J., Tan, C., Peralta, J., Ichter, B., & Hausman, K. (2023). Scaling robot learning with semantically imagined experience. In: Robotics: Science and Systems.","DOI":"10.15607\/RSS.2023.XIX.027"},{"key":"10210_CR59","doi-asserted-by":"crossref","unstructured":"Zhang, T., McCarthy, Z., Jow, O., Lee, D., Chen, X., Goldberg, K., & Abbeel, P. (2018). Deep imitation learning for complex manipulation tasks from virtual reality teleoperation. In: IEEE International Conference on Robotics and Automation, pp. 5628\u20135635.","DOI":"10.1109\/ICRA.2018.8461249"},{"key":"10210_CR60","doi-asserted-by":"crossref","unstructured":"Zhang, X., Chang, M., Kumar, P., & Gupta, S. (2024). Diffusion meets DAgger: Supercharging eye-in-hand imitation learning. In: Robotics: Science and Systems.","DOI":"10.15607\/RSS.2024.XX.048"},{"key":"10210_CR61","unstructured":"Zhao, X., Ding, W., An, Y., Du, Y., Yu, T., Li, M., Tang, M., & Wang, J. (2023). Fast segment anything. arXiv preprint arXiv:2306.12156."},{"issue":"4","key":"10210_CR62","first-page":"1","volume":"31","author":"Y Zheng","year":"2012","unstructured":"Zheng, Y., Chen, X., Cheng, M. M., Zhou, K., Hu, S. M., & Mitra, N. J. (2012). Interactive images: Cuboid proxies for smart image manipulation. ACM Transactions on Graphics, 31(4), 1\u201311.","journal-title":"ACM Transactions on Graphics"},{"key":"10210_CR63","doi-asserted-by":"crossref","unstructured":"Zhi, W., Zhang, T., & Johnson-Roberson, M. (2024). Instructing robots by sketching: Learning from demonstration via probabilistic diagrammatic teaching. In: IEEE International Conference on Robotics and Automation, pp. 15047\u201315053.","DOI":"10.1109\/ICRA57147.2024.10611349"},{"key":"10210_CR64","doi-asserted-by":"crossref","unstructured":"Zhou, X., Girdhar, R., Joulin, A., Kr\u00e4henb\u00fchl, P., & Misra, I. (2022). Detecting twenty-thousand classes using image-level supervision. In: European Conference on Computer Vision, pp. 350\u2013368.","DOI":"10.1007\/978-3-031-20077-9_21"},{"issue":"2","key":"10210_CR65","doi-asserted-by":"publisher","first-page":"4063","DOI":"10.1109\/LRA.2022.3150013","volume":"7","author":"J Zhu","year":"2022","unstructured":"Zhu, J., Gienger, M., & Kober, J. (2022). Learning task-parameterized skills from few demonstrations. IEEE Robotics and Automation Letters, 7(2), 4063\u20134070.","journal-title":"IEEE Robotics and Automation Letters"},{"key":"10210_CR66","unstructured":"Zhu, Y., Joshi, A., Stone, P., & Zhu, Y. (2023). VIOLA: Imitation learning for vision-based manipulation with object proposal priors. In: Conference on Robot Learning, pp. 1199\u20131210. PMLR."}
],"container-title":["Autonomous Robots"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10514-025-10210-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10514-025-10210-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10514-025-10210-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,27]],"date-time":"2025-09-27T11:30:34Z","timestamp":1758972634000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10514-025-10210-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,9]]},"references-count":66,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2025,9]]}},"alternative-id":["10210"],"URL":"https:\/\/doi.org\/10.1007\/s10514-025-10210-x","relation":{},"ISSN":["0929-5593","1573-7527"],"issn-type":[{"value":"0929-5593","type":"print"},{"value":"1573-7527","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,9]]},"assertion":[{"value":"17 May 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"24 July 2025","order":2,"name":"revised","label":"Revised","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"13 August 2025","order":3,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 September 2025","order":4,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"All physical experiments that relied on interactions with humans were conducted under university guidelines and followed the protocol of Virginia Tech IRB-1237.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical approval"}}],"article-number":"25"}}