{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T18:36:31Z","timestamp":1775068591217,"version":"3.50.1"},"reference-count":298,"publisher":"Emerald","issue":"1-2","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2022,11,22]]},"abstract":"<jats:p>Interactive Imitation Learning (IIL) is a branch of Imitation Learning (IL) where human feedback is provided intermittently during robot execution allowing an online improvement of the robot\u2019s behavior.<\/jats:p>\n                  <jats:p>In recent years, IIL has increasingly started to carve out its own space as a promising data-driven alternative for solving complex robotic tasks. The advantages of IIL are twofold, 1) it is data-efficient, as the human feedback guides the robot directly towards an improved behavior (in contrast with Reinforcement Learning (RL), where behaviors must be discovered by trial and error), and 2) it is robust, as the distribution mismatch between the teacher and learner trajectories is minimized by providing feedback directly over the learner\u2019s trajectories (as opposed to offline IL methods such as Behavioral Cloning).<\/jats:p>\n                  <jats:p>Nevertheless, despite the opportunities that IIL presents, its terminology, structure, and applicability are not clear nor unified in the literature, slowing down its development and, therefore, the research of innovative formulations and discoveries.<\/jats:p>\n                  <jats:p>In this work, we attempt to facilitate research in IIL and lower entry barriers for new practitioners by providing a survey of the field that unifies and structures it. 
In addition, we aim to raise awareness of its potential, what has been accomplished, and which research questions remain open.<\/jats:p>\n                  <jats:p>We organize the most relevant works in IIL in terms of human-robot interaction (i.e., types of feedback), interfaces (i.e., means of providing feedback), learning (i.e., models learned from feedback and function approximators), user experience (i.e., human perception about the learning process), applications, and benchmarks. Furthermore, we analyze similarities and differences between IIL and RL, providing a discussion on how the concepts of offline, online, off-policy, and on-policy learning should be transferred to IIL from the RL literature.<\/jats:p>\n                  <jats:p>We particularly focus on robotic applications in the real world and discuss their implications, limitations, and promising future areas of research.<\/jats:p>","DOI":"10.1561\/2300000072","type":"journal-article","created":{"date-parts":[[2022,11,22]],"date-time":"2022-11-22T08:54:03Z","timestamp":1669107243000},"page":"1-197","source":"Crossref","is-referenced-by-count":49,"title":["Interactive Imitation Learning in Robotics: A Survey"],"prefix":"10.1561","volume":"10","author":[{"given":"Carlos","family":"Celemin","sequence":"first","affiliation":[{"name":"Delft University of Technology,","place":["The Netherlands"]}]},{"given":"Rodrigo","family":"P\u00e9rez-Dattari","sequence":"additional","affiliation":[{"name":"Delft University of Technology,","place":["The Netherlands"]}]},{"given":"Eugenio","family":"Chisari","sequence":"additional","affiliation":[{"name":"University of Freiburg,","place":["Germany"]}]},{"given":"Giovanni","family":"Franzese","sequence":"additional","affiliation":[{"name":"Delft University of Technology,","place":["The Netherlands"]}]},{"given":"Leandro","family":"de Souza Rosa","sequence":"additional","affiliation":[{"name":"Delft University of Technology,","place":["The 
Netherlands"]}]},{"given":"Ravi","family":"Prakash","sequence":"additional","affiliation":[{"name":"Delft University of Technology ,","place":["The Netherlands"]}]},{"given":"Zlatan","family":"Ajanovi\u0107","sequence":"additional","affiliation":[{"name":"Delft University of Technology ,","place":["The Netherlands"]}]},{"given":"Marta","family":"Ferraz","sequence":"additional","affiliation":[{"name":"Delft University of Technology ,","place":["The Netherlands"]}]},{"given":"Abhinav","family":"Valada","sequence":"additional","affiliation":[{"name":"University of Freiburg ,","place":["Germany"]}]},{"given":"Jens","family":"Kober","sequence":"additional","affiliation":[{"name":"Delft University of Technology ,","place":["The Netherlands"]}]}],"member":"140","published-online":{"date-parts":[[2022,11,22]]},"reference":[{"key":"2026040113120620700_ref001","first-page":"1","article-title":"Human performance measures: mathematics","author":"Abdel-Malek","year":"2005","journal-title":"Department of Mechanical Engineering The University of Iowa, Technical report"},{"key":"2026040113120620700_ref002","article-title":"Fighting Failures with FIRE: Failure Identification to Reduce Expert Burden in Intervention Based Learning","author":"Ablett","year":"2020"},{"key":"2026040113120620700_ref003","doi-asserted-by":"crossref","first-page":"116","DOI":"10.1007\/978-3-642-33486-3_8","volume-title":"Machine Learning and Knowledge Discovery in Databases","author":"Akrour","year":"2012"},{"key":"2026040113120620700_ref004","first-page":"12","article-title":"Preference-Based Policy Learning","volume-title":"Proceedings of the 2011 European Conference on Machine Learning and Knowledge Discovery in Databases Volume Part I","author":"Akrour","year":"2011"},{"key":"2026040113120620700_ref005","first-page":"1503","article-title":"Programming by Feedback","volume-title":"International Conference on Machine 
Learning","author":"Akrour","year":"2014"},{"key":"2026040113120620700_ref006","doi-asserted-by":"crossref","DOI":"10.1609\/aaai.v32i1.11797","article-title":"Safe Reinforcement Learning via Shielding","volume-title":"Proceedings of the AAAI Conference on Artificial Intelligence","author":"Alshiekh","year":"2018"},{"key":"2026040113120620700_ref007","doi-asserted-by":"crossref","first-page":"313","DOI":"10.1016\/j.cogsys.2016.06.002","article-title":"Metrics and bench marks in human-robot interaction: Recent advances in cognitive robotics","volume":"43","author":"Aly","year":"2017","journal-title":"Cognitive Systems Research"},{"issue":"4","key":"2026040113120620700_ref008","doi-asserted-by":"crossref","first-page":"105","DOI":"10.1609\/aimag.v35i4.2513","article-title":"Power to the people: The role of humans in interactive machine learning","volume":"35","author":"Amershi","year":"2014","journal-title":"AI Magazine"},{"key":"2026040113120620700_ref009","doi-asserted-by":"crossref","first-page":"21","DOI":"10.1145\/2207676.2207680","article-title":"Regroup: Interactive machine learning for on-demand group creation in social networks","volume-title":"Proceedings of the SIGCHI Conference on Human Factors in Computing Systems","author":"Amershi","year":"2012"},{"key":"2026040113120620700_ref010","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.1606.06565","article-title":"Concrete Problems in AI Safety","author":"Amodei","year":"2016"},{"key":"2026040113120620700_ref011","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.1810.11748","article-title":"DQN-TAMER: Human-in-the-Loop Reinforcement Learning with Intractable Feedback","author":"Arakawa","year":"2018"},{"key":"2026040113120620700_ref012","doi-asserted-by":"crossref","unstructured":"Argall, B. D.. (2009). \u201cLearning mobile robot motion control from demonstration and corrective feedback\u201d. PhD thesis. 
Carnegie Mellon University.","DOI":"10.1007\/978-3-642-05181-4_18"},{"issue":"5","key":"2026040113120620700_ref013","doi-asserted-by":"crossref","first-page":"469","DOI":"10.1016\/j.robot.2008.10.024","article-title":"A survey of robot learning from demonstration","volume":"57","author":"Argall","year":"2009","journal-title":"Robotics and autonomous systems"},{"key":"2026040113120620700_ref014","doi-asserted-by":"publisher","first-page":"399","DOI":"10.1109\/IROS.2008.4651020","article-title":"Learning robot motion control with demonstration and advice-operators","volume-title":"2008 IEEE\/RSJ International Conference on Intelligent Robots and Systems","author":"Argall","year":"2008"},{"issue":"3","key":"2026040113120620700_ref015","doi-asserted-by":"crossref","first-page":"243","DOI":"10.1016\/j.robot.2010.11.004","article-title":"Teacher feedback to scaffold and refine demonstrated motion primitives on a mobile robot","volume":"59","author":"Argall","year":"2011","journal-title":"Robotics and Autonomous Systems"},{"issue":"2","key":"2026040113120620700_ref016","doi-asserted-by":"publisher","first-page":"79","DOI":"10.1561\/2300000012","article-title":"Tactile Guidance for Policy Adaptation","volume":"1","author":"Argall","year":"2011","journal-title":"Foundations and Trends\u00ae in Robotics"},{"key":"2026040113120620700_ref017","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.1902.04257","article-title":"Deep Reinforcement Learning from Policy-Dependent Human Feedback","author":"Arumugam","year":"2019"},{"key":"2026040113120620700_ref018","doi-asserted-by":"crossref","first-page":"1195","DOI":"10.1145\/3357236.3395525","article-title":"A Survey on Interactive Reinforcement Learning: Design Principles and Open Challenges","volume-title":"Proceedings of the 2020 ACM Designing Interactive Systems Conference","author":"Arzate Cruz","year":"2020"},{"key":"2026040113120620700_ref019","first-page":"103","article-title":"A Framework for Behavioural 
Cloning","author":"Bain","year":"1995","journal-title":"Machine Intelligence 15"},{"key":"2026040113120620700_ref020","doi-asserted-by":"crossref","first-page":"141","DOI":"10.1145\/3171221.3171267","article-title":"Learning from physical human corrections, one feature at a time","volume-title":"Proceedings of the 2018 ACM\/IEEE International Conference on Human-Robot Interaction","author":"Bajcsy","year":"2018"},{"key":"2026040113120620700_ref021","first-page":"217","article-title":"Learning robot objectives from physical human interaction","volume-title":"Conference on Robot Learning","author":"Bajcsy","year":"2017"},{"key":"2026040113120620700_ref022","first-page":"24","article-title":"On-policy robot imitation learning from a converging supervisor","volume-title":"Conference on Robot Learning","author":"Balakrishna","year":"2020"},{"issue":"4","key":"2026040113120620700_ref023","doi-asserted-by":"crossref","first-page":"305","DOI":"10.1007\/s00799-015-0156-0","article-title":"Paper rec ommender systems: a literature survey","volume":"17","author":"Beel","year":"2016","journal-title":"International Journal on Digital Libraries"},{"key":"2026040113120620700_ref024","article-title":"Robot competitions-ideal benchmarks for robotics research","volume-title":"Proc. 
of IROS-2006 Workshop on Benchmarks in Robotics Research","author":"Behnke","year":"2006"},{"key":"2026040113120620700_ref025","first-page":"679","article-title":"A Markovian decision process","author":"Bellman","year":"1957","journal-title":"Journal of Mathematics and Mechanics"},{"key":"2026040113120620700_ref026","first-page":"492","article-title":"Kinesthetic Bootstrapping: Teaching Motor Skills to Humanoid Robots through Physical Interaction","author":"Ben Amor","year":"2009","journal-title":"KI 2009: Advances in Artificial Intelligence"},{"key":"2026040113120620700_ref027","volume-title":"Handbook of robotics","author":"Billard","year":"2008"},{"issue":"12","key":"2026040113120620700_ref028","doi-asserted-by":"crossref","first-page":"3824","DOI":"10.4249\/scholarpedia.3824","article-title":"Robot learning by demonstration","volume":"8","author":"Billard","year":"2013","journal-title":"Scholarpedia"},{"key":"2026040113120620700_ref029","doi-asserted-by":"crossref","first-page":"1995","DOI":"10.1007\/978-3-319-32552-1_74","volume-title":"Springer handbook of robotics","author":"Billard","year":"2016"},{"issue":"1","key":"2026040113120620700_ref030","doi-asserted-by":"crossref","first-page":"1","DOI":"10.2478\/s13230-010-0001-5","article-title":"A formalism for learning from demonstration","volume":"1","author":"Billing","year":"2010","journal-title":"Paladyn, Journal of Behavioral Robotics"},{"key":"2026040113120620700_ref031","volume-title":"Machine learning","author":"Bishop","year":"2006"},{"key":"2026040113120620700_ref032","first-page":"1177","article-title":"Asking Easy Questions: A User-Friendly Approach to Active Reward Learning","volume-title":"Proceedings of the Conference on Robot Learning","author":"Biyik","year":"2020"},{"key":"2026040113120620700_ref033","first-page":"519","article-title":"Batch Active Preference-Based Learning of Reward Functions","volume-title":"Proceedings of The 2nd Conference on Robot 
Learning","author":"Biyik","year":"2018"},{"key":"2026040113120620700_ref034","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2005.02575","article-title":"Active Preference-Based Gaussian Process Regression for Reward Learning","author":"B\u0131y\u0131k","year":"2020"},{"key":"2026040113120620700_ref035","article-title":"Following high-level navigation instructions on a simulated quad copter with imitation learning","author":"Blukis","year":"2018"},{"key":"2026040113120620700_ref036","doi-asserted-by":"publisher","first-page":"417","DOI":"10.1145\/566570.566597","article-title":"Integrated Learning for Interactive Synthetic Characters","volume-title":"Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques","author":"Blumberg","year":"2002"},{"key":"2026040113120620700_ref037","doi-asserted-by":"crossref","first-page":"109","DOI":"10.1016\/j.knosys.2013.03.012","article-title":"Rec ommender systems survey","volume":"46","author":"Bobadilla","year":"2013","journal-title":"Knowledge-based systems"},{"key":"2026040113120620700_ref038","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2205.07882","article-title":"Aligning Robot Representations with Humans","author":"Bobu","year":"2022"},{"key":"2026040113120620700_ref039","doi-asserted-by":"publisher","first-page":"216","DOI":"10.1145\/3434073.3444667","article-title":"Fea ture Expansive Reward Learning: Rethinking Human Input","volume-title":"Proceedings of the 2021 ACM\/IEEE International Conference on Human-Robot Interaction","author":"Bobu","year":"2021"},{"issue":"0","key":"2026040113120620700_ref040","doi-asserted-by":"publisher","first-page":"02783649221078031","DOI":"10.1177\/02783649221078031","article-title":"Inducing Structure in Reward Learning by Learning Features","volume":"0","author":"Bobu","year":"2022","journal-title":"The Interna tional Journal of Robotics 
Research"},{"issue":"4","key":"2026040113120620700_ref041","doi-asserted-by":"crossref","first-page":"353","DOI":"10.1007\/s13218-015-0356-1","article-title":"Autonomous learning of state representations for control: An emerging field aims to autonomously learn state representations for reinforcement learning agents from their real world sensor observations","volume":"29","author":"B\u00f6hmer","year":"2015","journal-title":"KI-K\u00fcnstliche Intelligenz"},{"key":"2026040113120620700_ref042","first-page":"665","article-title":"Interactive learning of sensor policy fusion","volume-title":"2021 30th IEEE International Conference on Robot&Human Interactive Communication (RO-MAN)","author":"Bootsma","year":"2021"},{"key":"2026040113120620700_ref043","first-page":"747","article-title":"Accounting for Variance in Machine Learning Benchmarks","volume":"3","author":"Bouthillier","year":"2021","journal-title":"Proceedings of Machine Learning and Systems"},{"key":"2026040113120620700_ref044","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.1606.01540","article-title":"OpenAI Gym","author":"Brockman","year":"2016"},{"key":"2026040113120620700_ref045","first-page":"362","article-title":"Risk-aware active inverse reinforcement learning","volume-title":"Conference on Robot Learning","author":"Brown","year":"2018"},{"key":"2026040113120620700_ref046","article-title":"Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences","volume-title":"Proceedings of the 37th International Conference on Machine Learning. 
ICML\u201920","author":"Brown","year":"2020"},{"issue":"4","key":"2026040113120620700_ref047","doi-asserted-by":"crossref","first-page":"331","DOI":"10.1023\/A:1021240730564","article-title":"Hybrid recommender systems: Survey and exper iments","volume":"12","author":"Burke","year":"2002","journal-title":"User modeling and user-adapted interaction"},{"key":"2026040113120620700_ref048","doi-asserted-by":"crossref","first-page":"17","DOI":"10.1145\/2157689.2157693","article-title":"Designing robot learners that ask good questions","volume-title":"Proceedings of the seventh annual ACM\/IEEE international conference on Human-Robot Interaction","author":"Cakmak","year":"2012"},{"key":"2026040113120620700_ref049","first-page":"1","article-title":"Learning from demonstration (programming by demonstration)","author":"Calinon","year":"2018","journal-title":"Encyclopedia of Robotics"},{"key":"2026040113120620700_ref050","doi-asserted-by":"crossref","first-page":"22","DOI":"10.1007\/978-3-319-47437-3_3","volume-title":"Social Robotics","author":"Canal","year":"2016"},{"key":"2026040113120620700_ref051","doi-asserted-by":"publisher","first-page":"3273","DOI":"10.1109\/ICRA.2018.8460606","article-title":"Join ing high-level symbolic planning with low-level motion primitives in adaptive HRI: application to dressing assistance","volume-title":"IEEE International Conference on Robotics and Automation (ICRA)","author":"Canal","year":"2018"},{"issue":"4","key":"2026040113120620700_ref052","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3472208","article-title":"Are Preferences Useful for Better Assistance?: A Physically Assistive Robotics User Study","volume":"10","author":"Canal","year":"2021","journal-title":"ACM Transactions on Human-Robot Interaction (THRI)"},{"key":"2026040113120620700_ref053","first-page":"3366","article-title":"Policy Shaping with Human 
Teachers.","author":"Cederborg","year":"2015","journal-title":"IJCAI"},{"issue":"14","key":"2026040113120620700_ref054","doi-asserted-by":"crossref","first-page":"1560","DOI":"10.1177\/0278364919871998","article-title":"Reinforcement learning of motor skills using policy search and human corrective advice","volume":"38","author":"Celemin","year":"2019","journal-title":"The International Journal of Robotics Research"},{"key":"2026040113120620700_ref055","doi-asserted-by":"publisher","first-page":"581","DOI":"10.1109\/ICAR.2015.7251514","article-title":"COACH: Learning continu ous actions from COrrective Advice Communicated by Humans","volume-title":"2015 International Conference on Advanced Robotics (ICAR)","author":"Celemin","year":"2015"},{"issue":"1","key":"2026040113120620700_ref056","doi-asserted-by":"publisher","first-page":"77","DOI":"10.1007\/s10846-018-0839-z","article-title":"An interactive framework for learning continuous actions policies based on corrective feedback","volume":"95","author":"Celemin","year":"2019","journal-title":"Journal of Intelligent&Robotic Systems"},{"issue":"5","key":"2026040113120620700_ref057","doi-asserted-by":"crossref","first-page":"1173","DOI":"10.1007\/s10514-018-9786-6","article-title":"A fast hybrid reinforcement learning framework with human corrective feedback","volume":"43","author":"Celemin","year":"2019","journal-title":"Autonomous Robots"},{"key":"2026040113120620700_ref058","first-page":"2058","article-title":"Learning to search better than your teacher","volume-title":"International Conference on Machine Learning","author":"Chang","year":"2015"},{"issue":"3","key":"2026040113120620700_ref059","doi-asserted-by":"crossref","first-page":"207","DOI":"10.1177\/1473871620904671","article-title":"A survey of surveys on the use of visualization for interpreting machine learning models","volume":"19","author":"Chatzimparmpas","year":"2020","journal-title":"Information 
Visualization"},{"key":"2026040113120620700_ref060","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.1805.10413","article-title":"Fast Policy Learning through Imitation and Reinforcement","author":"Cheng","year":"2018"},{"issue":"3","key":"2026040113120620700_ref061","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1007\/978-3-031-01570-0","article-title":"Robot learning from human teachers","volume":"8","author":"Chernova","year":"2014","journal-title":"Synthesis Lectures on Artificial Intelligence and Machine Learning"},{"issue":"1","key":"2026040113120620700_ref062","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1613\/jair.2584","article-title":"Interactive policy learning through confidence-based autonomy","volume":"34","author":"Chernova","year":"2009","journal-title":"Journal of Artificial Intelligence Research"},{"issue":"2","key":"2026040113120620700_ref063","doi-asserted-by":"crossref","first-page":"3695","DOI":"10.1109\/LRA.2022.3145516","article-title":"Correct me if i am wrong: Interactive learning for robotic manipulation","volume":"7","author":"Chisari","year":"2022","journal-title":"IEEE Robotics and Automation Letters"},{"key":"2026040113120620700_ref064","volume-title":"Advances in Neural Information Processing Systems","author":"Christiano","year":"2017"},{"key":"2026040113120620700_ref065","doi-asserted-by":"publisher","first-page":"119","DOI":"10.1109\/HAPTICS.2016.7463165","article-title":"Learning haptic af fordances from demonstration and human-guided exploration","author":"Chu","year":"2016","journal-title":"2016 IEEE Haptics Symposium (HAPTICS)"},{"key":"2026040113120620700_ref066","article-title":"Exploring Affordances Using Human-Guidance and Self-Exploration","author":"Chu","year":"2015","journal-title":"2015 AAAI Fall Sym posium Series"},{"key":"2026040113120620700_ref067","doi-asserted-by":"crossref","first-page":"92","DOI":"10.1016\/B978-1-55860-247-2.50017-6","article-title":"A Teaching Method for Reinforce ment 
Learning","author":"Clouse","year":"1992","journal-title":"Machine Learning Proceedings 1992"},{"key":"2026040113120620700_ref068","doi-asserted-by":"crossref","first-page":"129","DOI":"10.1613\/jair.295","article-title":"Active learning with statistical models","volume":"4","author":"Cohn","year":"1996","journal-title":"Journal of artificial intelligence research"},{"key":"2026040113120620700_ref069","doi-asserted-by":"publisher","first-page":"29","DOI":"10.1007\/978-3-319-31056-5_4","volume-title":"Toward Robotic Socially Believable Behaving Systems-Volume I : Modeling Emotions","author":"Corrigan","year":"2016"},{"key":"2026040113120620700_ref070","doi-asserted-by":"publisher","DOI":"10.1145\/2776880.2792704","volume-title":"ACM SIGGRAPH 2015 Courses. SIGGRAPH \u201915","author":"Coumans","year":"2015"},{"key":"2026040113120620700_ref071","article-title":"BAgger: A Bayesian algorithm for safe and query-efficient imitation learning","author":"Cronrath","year":"2018","journal-title":"Machine Learning in Robot Motion Planning IROS 2018 Workshop"},{"key":"2026040113120620700_ref072","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/IJCNN.2018.8489237","article-title":"Multi-modal Feedback for Affordance-driven Interactive Reinforcement Learning","volume-title":"2018 International Joint Conference on Neural Networks (IJCNN)","author":"Cruz","year":"2018"},{"key":"2026040113120620700_ref073","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/IJCNN.2015.7280477","article-title":"Interactive reinforcement learning through speech guidance in a domestic scenario","volume-title":"2015 International Joint Conference on Neural Networks (IJCNN)","author":"Cruz","year":"2015"},{"key":"2026040113120620700_ref074","first-page":"19","article-title":"Machine learning for interactive systems and robots: a brief introduction","author":"Cuay\u00e1huitl","year":"2013","journal-title":"Proceedings of the 2nd Workshop on Ma chine Learning for Interactive 
Systems: Bridging the Gap Between Perception, Action and Communication"},{"key":"2026040113120620700_ref075","doi-asserted-by":"crossref","first-page":"761","DOI":"10.1109\/ICRA.2019.8794025","article-title":"Uncertainty aware data aggregation for deep imitation learning","volume-title":"2019 International Conference on Robotics and Automation (ICRA)","author":"Cui","year":"2019"},{"key":"2026040113120620700_ref076","article-title":"Understanding the Relationship between Interactions and Outcomes in Human-in-the-Loop Machine Learning","volume-title":"Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, Montreal, QC, Canada","author":"Cui","year":"2021"},{"key":"2026040113120620700_ref077","doi-asserted-by":"publisher","first-page":"6907","DOI":"10.1109\/ICRA.2018.8460854","article-title":"Active Reward Learning from Critiques","volume-title":"2018 IEEE International Conference on Robotics and Automation (ICRA)","author":"Cui","year":"2018"},{"issue":"1","key":"2026040113120620700_ref078","doi-asserted-by":"publisher","first-page":"43","DOI":"10.1093\/comjnl\/19.1.43","article-title":"A synthetic benchmark","volume":"19","author":"Curnow","year":"1976","journal-title":"The Computer Journal"},{"key":"2026040113120620700_ref079","first-page":"368","article-title":"Cellular encoding for interactive evolutionary robotics","volume-title":"Fourth European conference on artificial life","author":"CWI","year":"1997"},{"issue":"3","key":"2026040113120620700_ref080","doi-asserted-by":"publisher","first-page":"389","DOI":"10.1007\/s10514-015-9454-z","article-title":"Active Reward Learning with a Novel Acquisition Function","volume":"39","author":"Daniel","year":"2015","journal-title":"Autonomous Robots"},{"key":"2026040113120620700_ref081","first-page":"885","article-title":"RoboNet: Large-Scale Multi Robot Learning","volume-title":"Proceedings of the Conference on Robot 
Learning","author":"Dasari","year":"2020"},{"key":"2026040113120620700_ref082","article-title":"Off-Policy Actor-Critic","volume-title":"International Conference on Machine Learning","author":"Degris","year":"2012"},{"issue":"1\u20132","key":"2026040113120620700_ref083","first-page":"1","article-title":"A survey on policy search for robotics","volume":"2","author":"Deisenroth","year":"2013","journal-title":"Foundations and Trends\u00ae in Robotics"},{"issue":"5","key":"2026040113120620700_ref084","doi-asserted-by":"crossref","first-page":"1141","DOI":"10.1109\/TRO.2018.2830407","article-title":"Toward dexterous manipulation with augmented adaptive synergies: The pisa\/iit softhand 2","volume":"34","author":"Della Santina","year":"2018","journal-title":"IEEE Transactions on Robotics"},{"key":"2026040113120620700_ref085","doi-asserted-by":"crossref","first-page":"10226","DOI":"10.1109\/ICRA40945.2020.9196754","article-title":"Helping robots learn: a human-robot master apprentice model using demonstrations via virtual reality teleoperation","volume-title":"2020 IEEE International Conference on Robotics and Automation (ICRA)","author":"DelPreto","year":"2020"},{"key":"2026040113120620700_ref086","first-page":"1","article-title":"CARLA: An open urban driving simulator","volume-title":"Conference on robot learning","author":"Dosovitskiy","year":"2017"},{"issue":"2","key":"2026040113120620700_ref087","doi-asserted-by":"publisher","DOI":"10.1145\/3185517","article-title":"A Review of User Interface Design for Interactive Machine Learning","volume":"8","author":"Dudley","year":"2018","journal-title":"ACM Trans. Interact. Intell. 
Syst."},{"key":"2026040113120620700_ref088","doi-asserted-by":"publisher","first-page":"351","DOI":"10.1109\/HUMANOIDS.2016.7803300","article-title":"Incremental imitation learning of context-dependent motor skills","volume-title":"2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids)","author":"Ewerton","year":"2016"},{"key":"2026040113120620700_ref089","doi-asserted-by":"crossref","first-page":"39","DOI":"10.1145\/604045.604056","article-title":"Interactive machine learning","volume-title":"Proceedings of the 8th international conference on Intelligent user interfaces","author":"Fails","year":"2003"},{"key":"2026040113120620700_ref090","first-page":"767","article-title":"SURREAL: Open-Source Re inforcement Learning Framework and Robot Manipulation Bench mark","volume-title":"Proceedings of The 2nd Conference on Robot Learning","author":"Fan","year":"2018"},{"key":"2026040113120620700_ref091","first-page":"275","article-title":"Multisensory real-time space telerobotics","volume-title":"Intelligent Computing-Proceedings of the Computing Conference","author":"Ferraz","year":"2019"},{"key":"2026040113120620700_ref092","first-page":"49","article-title":"Guided Cost Learning: Deep Inverse Optimal Control via Policy Optimization","volume-title":"Proceedings of the 33rd International Conference on International Conference on Machine Learning-Volume 48. ICML\u201916","author":"Finn","year":"2016"},{"key":"2026040113120620700_ref093","article-title":"Situated mapping for transfer learning","volume-title":"Fourth Annual Conference on Advances in Cognitive Systems","author":"Fitzgerald","year":"2016"},{"issue":"2","key":"2026040113120620700_ref094","doi-asserted-by":"publisher","DOI":"10.1145\/3277905","article-title":"Human-Guided Object Mapping for Task Transfer","volume":"7","author":"Fitzgerald","year":"2018","journal-title":"J. 
Hum.-Robot Interact."},{"issue":"3","key":"2026040113120620700_ref095","doi-asserted-by":"publisher","first-page":"218","DOI":"10.1145\/5666.5673","article-title":"How Not to Lie with Statistics: The Correct Way to Summarize Benchmark Results","volume":"29","author":"Fleming","year":"1986","journal-title":"Commun. ACM."},{"key":"2026040113120620700_ref096","first-page":"1298","article-title":"Learning Interactively to Resolve Ambiguity in Reference Frame Selection","volume-title":"Proceedings of the 2020 Conference on Robot Learning","author":"Franzese","year":"2021"},{"key":"2026040113120620700_ref097","doi-asserted-by":"publisher","first-page":"7778","DOI":"10.1109\/IROS51168.2021.9636710","article-title":"ILoSA: Interactive Learning of Stiffness and Attractors","author":"Franzese","year":"2021"},{"issue":"1-2","key":"2026040113120620700_ref098","doi-asserted-by":"crossref","first-page":"123","DOI":"10.1007\/s10994-012-5313-8","article-title":"Preference-based reinforcement learning: a formal framework and a policy iteration algorithm","volume":"89","author":"F\u00fcrnkranz","year":"2012","journal-title":"Machine learning"},{"issue":"1","key":"2026040113120620700_ref099","first-page":"1437","article-title":"A Comprehensive Survey on Safe Reinforcement Learning","volume":"16","author":"Garc\u0131a","year":"2015","journal-title":"Journal of Machine Learning Research"},{"key":"2026040113120620700_ref100","first-page":"1259","article-title":"A divergence minimization perspective on imitation learning methods","author":"Ghasemipour","year":"2020","journal-title":"Conference on Robot Learning"},{"issue":"2","key":"2026040113120620700_ref101","first-page":"67","article-title":"The theory of affordances","volume":"1","author":"Gibson","year":"1977","journal-title":"Hilldale, USA"},{"key":"2026040113120620700_ref102","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33012462","article-title":"Efficiently Combining Human Demonstrations and Interventions for Safe 
Training of Autonomous Systems in Real Time","volume-title":"Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence. AAAI\u201919\/IAAI\u201919\/EAAI\u201919","author":"Goecks","year":"2019"},{"key":"2026040113120620700_ref103","volume-title":"Deep learning","author":"Goodfellow","year":"2016"},{"key":"2026040113120620700_ref104","first-page":"2625","article-title":"Policy Shaping: Integrating Human Feedback with Reinforcement Learning","volume-title":"Proceedings of the 26th International Conference on Neural Information Processing Systems-Volume 2. NIPS\u201913","author":"Griffith","year":"2013"},{"key":"2026040113120620700_ref105","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/DevLrn.2013.6652523","article-title":"Robot learning simultaneously a task and how to interpret human instructions","volume-title":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","author":"Grizou","year":"2013"},{"key":"2026040113120620700_ref106","doi-asserted-by":"crossref","first-page":"223","DOI":"10.2307\/3318737","article-title":"An adaptive Metropolis algorithm","author":"Haario","year":"2001","journal-title":"Bernoulli"},{"issue":"1","key":"2026040113120620700_ref107","doi-asserted-by":"crossref","first-page":"70","DOI":"10.1111\/j.0956-7976.2005.00782.x","article-title":"How many variables can humans process?","volume":"16","author":"Halford","year":"2005","journal-title":"Psychological science"},{"key":"2026040113120620700_ref108","first-page":"40","author":"Hammersley","year":"1964"},{"key":"2026040113120620700_ref109","volume-title":"Neural networks: a comprehensive foundation","author":"Haykin","year":"2001"},{"key":"2026040113120620700_ref110","first-page":"131","article-title":"Learning Behaviors with Uncertain Human 
Feedback","volume-title":"Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)","author":"He","year":"2020"},{"key":"2026040113120620700_ref111","doi-asserted-by":"publisher","first-page":"207","DOI":"10.1145\/3434073.3444664","article-title":"The Effects of a Robot\u2019s Performance on Human Teachers for Learning from Demonstration Tasks","volume-title":"Proceedings of the 2021 ACM\/IEEE International Conference on Human-Robot Interaction. HRI \u201921","author":"Hedlund","year":"2021"},{"key":"2026040113120620700_ref112","doi-asserted-by":"crossref","DOI":"10.1609\/aaai.v32i1.11757","article-title":"Deep q-learning from demonstrations","volume-title":"Thirty-second AAAI conference on artificial intelligence","author":"Hester","year":"2018"},{"key":"2026040113120620700_ref113","doi-asserted-by":"crossref","DOI":"10.1155\/2021\/6628320","article-title":"Inertial-based human motion capture: A technical summary of current processing methodologies for spatiotemporal and kinematic measures","volume":"2021","author":"Hindle","year":"2021","journal-title":"Applied Bionics and Biomechanics"},{"key":"2026040113120620700_ref114","article-title":"Teaching with rewards and punishments: Reinforcement or communication?","volume-title":"CogSci","author":"Ho","year":"2015"},{"issue":"2","key":"2026040113120620700_ref115","doi-asserted-by":"crossref","first-page":"119","DOI":"10.1007\/s40708-016-0042-6","article-title":"Interactive machine learning for health informatics: when do we need the human-in-the-loop?","volume":"3","author":"Holzinger","year":"2016","journal-title":"Brain Informatics"},{"key":"2026040113120620700_ref116","first-page":"598","article-title":"ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning","volume-title":"Proceedings of the 5th Conference on Robot 
Learning","author":"Hoque","year":"2022"},{"key":"2026040113120620700_ref117","doi-asserted-by":"publisher","first-page":"502","DOI":"10.1109\/CASE49439.2021.9551469","article-title":"LazyDAgger: Reducing Context Switching in Interactive Imitation Learning","volume-title":"2021 IEEE 17th International Conference on Automation Science and Engineering (CASE)","author":"Hoque","year":"2021"},{"key":"2026040113120620700_ref118","article-title":"Dynamic programming and Markov processes","author":"Howard","year":"1960"},{"issue":"7","key":"2026040113120620700_ref119","doi-asserted-by":"crossref","first-page":"833","DOI":"10.1177\/0278364919846363","article-title":"Kernelized movement primitives","volume":"38","author":"Huang","year":"2019","journal-title":"The International Journal of Robotics Research"},{"key":"2026040113120620700_ref120","doi-asserted-by":"publisher","DOI":"10.3389\/frobt.2021.650325","article-title":"From Learning to Relearning: A Framework for Diminishing Bias in Social Robot Navigation","volume":"8","author":"Hurtado","year":"2021","journal-title":"Frontiers in Robotics and AI"},{"issue":"2","key":"2026040113120620700_ref121","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3054912","article-title":"Imitation learning: A survey of learning methods","volume":"50","author":"Hussein","year":"2017","journal-title":"ACM Computing Surveys (CSUR)"},{"key":"2026040113120620700_ref122","first-page":"8022","article-title":"Reward Learning from Human Preferences and Demonstrations in Atari","volume-title":"Proceedings of the 32nd International Conference on Neural Information Processing Systems. 
NIPS\u201918","author":"Ibarz","year":"2018"},{"key":"2026040113120620700_ref123","volume-title":"Advances in Neural Information Processing Systems","author":"Isbell","year":"2001"},{"issue":"10","key":"2026040113120620700_ref124","doi-asserted-by":"crossref","first-page":"1296","DOI":"10.1177\/0278364915581193","article-title":"Learning preferences for manipulation tasks from online coactive feedback","volume":"34","author":"Jain","year":"2015","journal-title":"The International Journal of Robotics Research"},{"key":"2026040113120620700_ref125","first-page":"575","article-title":"Learning Trajectory Preferences for Manipulators via Iterative Improvement","volume-title":"Proceedings of the 26th International Conference on Neural Information Processing Systems-Volume 1. NIPS\u201913","author":"Jain","year":"2013"},{"issue":"2","key":"2026040113120620700_ref126","doi-asserted-by":"publisher","first-page":"3019","DOI":"10.1109\/LRA.2020.2974707","article-title":"RLBench: The Robot Learning Benchmark & Learning Environment","volume":"5","author":"James","year":"2020","journal-title":"IEEE Robotics and Automation Letters"},{"key":"2026040113120620700_ref127","first-page":"991","article-title":"BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning","volume-title":"Proceedings of the 5th Conference on Robot Learning","author":"Jang","year":"2022"},{"key":"2026040113120620700_ref128","first-page":"682","article-title":"Interactive Imitation Learning in State-Space","volume-title":"Proceedings of the 2020 Conference on Robot Learning","author":"Jauhri","year":"2021"},{"issue":"2","key":"2026040113120620700_ref129","doi-asserted-by":"crossref","first-page":"401","DOI":"10.1007\/s12650-018-0531-1","article-title":"Recent research advances on interactive machine learning","volume":"22","author":"Jiang","year":"2019","journal-title":"Journal of 
Visualization"},{"issue":"1","key":"2026040113120620700_ref130","doi-asserted-by":"publisher","first-page":"99","DOI":"10.1016\/S0004-3702(98)00023-X","article-title":"Planning and acting in partially observable stochastic domains","volume":"101","author":"Kaelbling","year":"1998","journal-title":"Artificial Intelligence"},{"issue":"2","key":"2026040113120620700_ref131","doi-asserted-by":"publisher","first-page":"1872","DOI":"10.1109\/LRA.2021.3060404","article-title":"LaND: Learning to Navigate From Disengagements","volume":"6","author":"Kahn","year":"2021","journal-title":"IEEE Robotics and Automation Letters"},{"key":"2026040113120620700_ref132","doi-asserted-by":"crossref","first-page":"4227","DOI":"10.1109\/ICSMC.1997.637363","article-title":"Control rule acquisition for an arm wrestling robot","volume-title":"1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation","author":"Kamohara","year":"1997"},{"issue":"3","key":"2026040113120620700_ref133","doi-asserted-by":"crossref","first-page":"197","DOI":"10.1016\/S0921-8890(02)00168-9","article-title":"Robotic clicker training","volume":"38","author":"Kaplan","year":"2002","journal-title":"Robotics and Autonomous Systems"},{"key":"2026040113120620700_ref134","first-page":"313","article-title":"Imitation learning as f-divergence minimization","volume-title":"International Workshop on the Algorithmic Foundations of Robotics","author":"Ke","year":"2020"},{"key":"2026040113120620700_ref135","doi-asserted-by":"publisher","first-page":"8077","DOI":"10.1109\/ICRA.2019.8793698","article-title":"HG-DAgger: Interactive Imitation Learning with Human Experts","volume-title":"2019 International Conference on Robotics and Automation (ICRA)","author":"Kelly","year":"2019"},{"key":"2026040113120620700_ref136","first-page":"5243","article-title":"What can I do here? 
A Theory of Affordances in Reinforcement Learning","volume-title":"International Conference on Machine Learning","author":"Khetarpal","year":"2020"},{"issue":"2","key":"2026040113120620700_ref137","doi-asserted-by":"publisher","first-page":"883","DOI":"10.1109\/LRA.2020.2965869","article-title":"Benchmarking Protocols for Evaluating Small Parts Robotic Assembly Systems","volume":"5","author":"Kimble","year":"2020","journal-title":"IEEE Robotics and Automation Letters"},{"key":"2026040113120620700_ref138","article-title":"Learning from feedback on actions past and intended","volume-title":"In Proceedings of 7th ACM\/IEEE International Conference on Human-Robot Interaction, Late-Breaking Reports Session (HRI 2012)","author":"Knox","year":"2012"},{"key":"2026040113120620700_ref139","first-page":"292","article-title":"Tamer: Training an agent manually via evaluative reinforcement","volume-title":"Development and Learning, 2008. ICDL 2008. 7th IEEE International Conference on","author":"Knox","year":"2008"},{"key":"2026040113120620700_ref140","doi-asserted-by":"crossref","first-page":"5","DOI":"10.65109\/MCUE9477","article-title":"Combining manual feedback with subsequent MDP reward signals for reinforcement learning","volume-title":"Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: volume 1-Volume 1","author":"Knox","year":"2010"},{"key":"2026040113120620700_ref141","doi-asserted-by":"crossref","first-page":"475","DOI":"10.65109\/PAJO3896","article-title":"Reinforcement learning from simultaneous human and MDP reward","volume-title":"Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems-Volume 1","author":"Knox","year":"2012"},{"key":"2026040113120620700_ref142","doi-asserted-by":"publisher","first-page":"9","DOI":"10.1145\/1597735.1597738","article-title":"Interactively Shaping Agents via Human Reinforcement: The TAMER Framework","volume-title":"Proceedings of the Fifth International 
Conference on Knowledge Capture. K-CAP \u201909","author":"Knox","year":"2009"},{"key":"2026040113120620700_ref143","doi-asserted-by":"publisher","first-page":"191","DOI":"10.1145\/2449396.2449422","article-title":"Learning Non-Myopically from Human-Generated Reward","volume-title":"Proceedings of the 2013 International Conference on Intelligent User Interfaces. IUI \u201913","author":"Knox","year":"2013"},{"key":"2026040113120620700_ref144","doi-asserted-by":"crossref","first-page":"24","DOI":"10.1016\/j.artint.2015.03.009","article-title":"Framing reinforcement learning from human reward: Reward positivity, temporal discounting, episodicity, and performance","volume":"225","author":"Knox","year":"2015","journal-title":"Artificial Intelligence"},{"key":"2026040113120620700_ref145","doi-asserted-by":"publisher","first-page":"460","DOI":"10.1007\/978-3-319-02675-6_46","article-title":"Training a Robot via Human Feedback: A Case Study","volume-title":"Proceedings of the 5th International Conference on Social Robotics-Volume 8239. ICSR 2013","author":"Knox","year":"2013"},{"key":"2026040113120620700_ref146","unstructured":"Kober, J. and J. Peters. (2008). \u201cPolicy Search for Motor Primitives in Robotics\u201d. 21. 
url: https:\/\/proceedings.neurips.cc\/paper\/2008\/file\/7647966b7343c29048673252e490f736-Paper.pdf."},{"key":"2026040113120620700_ref147","doi-asserted-by":"publisher","DOI":"10.3389\/frobt.2020.00097","article-title":"Multi-Channel Interactive Reinforcement Learning for Sequential Tasks","volume":"7","author":"Koert","year":"2020","journal-title":"Frontiers in Robotics and AI"},{"issue":"4","key":"2026040113120620700_ref148","doi-asserted-by":"crossref","first-page":"3719","DOI":"10.1109\/LRA.2019.2928760","article-title":"Learning intention aware online adaptation of movement primitives","volume":"4","author":"Koert","year":"2019","journal-title":"IEEE Robotics and Automation Letters"},{"key":"2026040113120620700_ref149","article-title":"Interaction considerations in learning from humans","volume-title":"International Joint Conference on Artificial Intelligence (IJCAI)","author":"Koppol","year":"2021"},{"issue":"1","key":"2026040113120620700_ref150","doi-asserted-by":"publisher","first-page":"44","DOI":"10.1109\/TCDS.2016.2628365","article-title":"Learning From Explanations Using Sentiment and Advice in RL","volume":"9","author":"Krening","year":"2017","journal-title":"IEEE Transactions on Cognitive and Developmental Systems"},{"issue":"2","key":"2026040113120620700_ref151","doi-asserted-by":"crossref","first-page":"2163","DOI":"10.1109\/LRA.2021.3060414","article-title":"Active learning of Bayesian probabilistic movement primitives","volume":"6","author":"Kulak","year":"2021","journal-title":"IEEE Robotics and Automation Letters"},{"key":"2026040113120620700_ref152","doi-asserted-by":"crossref","first-page":"3075","DOI":"10.1145\/2556288.2557238","article-title":"Structured labeling for facilitating concept evolution in machine learning","volume-title":"Proceedings of the SIGCHI Conference on Human Factors in Computing Systems","author":"Kulesza","year":"2014"},{"key":"2026040113120620700_ref153","volume-title":"Observing the user experience: a 
practitioner\u2019s guide to user research","author":"Kuniavsky","year":"2003"},{"key":"2026040113120620700_ref154","doi-asserted-by":"publisher","first-page":"358","DOI":"10.1109\/ICRA.2017.7989046","article-title":"Comparing human-centric and robot-centric sampling for robot deep learning from demonstrations","volume-title":"2017 IEEE International Conference on Robotics and Automation (ICRA)","author":"Laskey","year":"2017"},{"key":"2026040113120620700_ref155","first-page":"143","article-title":"Dart: Noise injection for robust imitation learning","volume-title":"Conference on robot learning","author":"Laskey","year":"2017"},{"key":"2026040113120620700_ref156","first-page":"462","article-title":"SHIV: Reducing supervisor burden in DAgger using support vectors for efficient learning from demonstrations in high dimensional state spaces","volume-title":"2016 IEEE International Conference on Robotics and Automation (ICRA)","author":"Laskey","year":"2016"},{"key":"2026040113120620700_ref157","first-page":"2917","article-title":"Hierarchical imitation and reinforcement learning","volume-title":"International conference on machine learning","author":"Le","year":"2018"},{"key":"2026040113120620700_ref158","article-title":"A survey of robot learning from demonstrations for human-robot collaboration","author":"Lee","year":"2017"},{"key":"2026040113120620700_ref159","first-page":"11","article-title":"To Follow or not to Follow: Selective Imitation Learning from Observations","volume":"100","author":"Lee","year":"2020","journal-title":"Proceedings of the Conference on Robot Learning"},{"key":"2026040113120620700_ref160","doi-asserted-by":"publisher","first-page":"549","DOI":"10.1007\/978-3-642-25085-9_65","article-title":"Teaching a Robot to Perform Task through Imitation and On-Line Feedback","volume-title":"Proceedings of the 16th Iberoamerican Congress Conference on Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications. 
CIARP\u201911","author":"Le\u00f3n","year":"2011"},{"key":"2026040113120620700_ref161","article-title":"Offline reinforcement learning: Tutorial, review, and perspectives on open problems","author":"Levine","year":"2020"},{"key":"2026040113120620700_ref162","first-page":"2618","article-title":"Genetic programming approach to the construction of a neural network for control of a walking robot.","author":"Lewis","year":"1992","journal-title":"ICRA"},{"issue":"4","key":"2026040113120620700_ref163","doi-asserted-by":"crossref","first-page":"337","DOI":"10.1109\/THMS.2019.2912447","article-title":"Human-centered reinforcement learning: A survey","volume":"49","author":"Li","year":"2019","journal-title":"IEEE Transactions on Human-Machine Systems"},{"issue":"5","key":"2026040113120620700_ref164","doi-asserted-by":"crossref","first-page":"826","DOI":"10.1007\/s10458-015-9308-2","article-title":"Using informative behavior to increase engagement while learning from human reward","volume":"30","author":"Li","year":"2016","journal-title":"Autonomous agents and multi-agent systems"},{"key":"2026040113120620700_ref165","doi-asserted-by":"publisher","DOI":"10.15607\/RSS.2019.XV.005","article-title":"OIL: Observational Imitation Learning","volume-title":"Proceedings of Robotics: Science and Systems","author":"Li","year":"2019"},{"issue":"10","key":"2026040113120620700_ref166","doi-asserted-by":"crossref","first-page":"4394","DOI":"10.1109\/TIT.2006.881731","article-title":"On divergences and informations in statistics and information theory","volume":"52","author":"Liese","year":"2006","journal-title":"IEEE Transactions on Information Theory"},{"key":"2026040113120620700_ref167","doi-asserted-by":"crossref","first-page":"120757","DOI":"10.1109\/ACCESS.2020.3006254","article-title":"A Review on Interactive Reinforcement Learning From Human Social Feedback","volume":"8","author":"Lin","year":"2020","journal-title":"IEEE 
Access"},{"issue":"3","key":"2026040113120620700_ref168","doi-asserted-by":"publisher","first-page":"293","DOI":"10.1007\/BF00992699","article-title":"Self-Improving Reactive Agents Based on Reinforcement Learning, Planning and Teaching","volume":"8","author":"Lin","year":"1992","journal-title":"Machine Learning"},{"issue":"2","key":"2026040113120620700_ref169","doi-asserted-by":"publisher","first-page":"2740","DOI":"10.1109\/LRA.2022.3143518","article-title":"Efficient and Interpretable Robot Manipulation With Graph Neural Networks","volume":"7","author":"Lin","year":"2022","journal-title":"IEEE Robotics and Automation Letters"},{"issue":"No. 1","key":"2026040113120620700_ref170","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v28i1.8839","article-title":"A Strategy-Aware Technique for Learning Behaviors from Discrete Human Feedback","volume":"28","author":"Loftin","year":"2014"},{"issue":"1","key":"2026040113120620700_ref171","doi-asserted-by":"crossref","first-page":"30","DOI":"10.1007\/s10458-015-9283-7","article-title":"Learning behaviors via human-delivered discrete feedback: modeling implicit feedback strategies to speed up learning","volume":"30","author":"Loftin","year":"2016","journal-title":"Autonomous agents and multi-agent systems"},{"key":"2026040113120620700_ref172","article-title":"Doing Right by Not Doing Wrong in Human-Robot Collaboration","author":"Londo\u00f1o","year":"2022"},{"issue":"1","key":"2026040113120620700_ref173","doi-asserted-by":"publisher","first-page":"20","DOI":"10.1177\/02783649211050958","article-title":"Physical interaction as communication: Learning robot objectives online from human corrections","volume":"41","author":"Losey","year":"2022","journal-title":"The International Journal of Robotics Research"},{"key":"2026040113120620700_ref174","doi-asserted-by":"publisher","first-page":"154","DOI":"10.1109\/ICEC.1998.699493","article-title":"Evolutionary robotics-a children\u2019s game","volume-title":"1998 IEEE 
International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence (Cat. No.98TH8360)","author":"Lund","year":"1998"},{"key":"2026040113120620700_ref175","article-title":"Robust Multi-Modal Policies for Industrial Assembly via Reinforcement Learning and Demonstrations: A Large-Scale Study","author":"Luo","year":"2021","journal-title":"Robotics: Science and Systems XVII, 2021"},{"key":"2026040113120620700_ref176","first-page":"2285","article-title":"Interactive Learning from Policy-Dependent Human Feedback","volume-title":"Proceedings of the 34th International Conference on Machine Learning","author":"MacGlashan","year":"2017"},{"key":"2026040113120620700_ref177","first-page":"6","article-title":"Training an agent to ground commands with reward and punishment","volume-title":"Proceedings of the AAAI Machine Learning for Interactive Systems Workshop","author":"MacGlashan","year":"2014"},{"key":"2026040113120620700_ref178","article-title":"Incorporating Advice into Agents That Learn from Reinforcements","volume-title":"University of Wisconsin-Madison. Computer Sciences Department","author":"Maclin","year":"1994"},{"key":"2026040113120620700_ref179","first-page":"37","article-title":"Active Incremental Learning of Robot Movement Primitives","volume":"78","author":"Maeda","year":"2017","journal-title":"Proceedings of the 1st Annual Conference on Robot Learning"},{"key":"2026040113120620700_ref180","unstructured":"Mahmood, A. (2017). 
\u201cIncremental off-policy reinforcement learning algorithms\u201d."},{"key":"2026040113120620700_ref181","doi-asserted-by":"publisher","first-page":"1048","DOI":"10.1109\/IROS40897.2019.8968114","article-title":"Scaling Robot Supervision to Hundreds of Hours with RoboTurk: Robotic Manipulation Dataset through Human Reasoning and Dexterity","volume-title":"2019 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS)","author":"Mandlekar","year":"2019"},{"key":"2026040113120620700_ref182","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2012.06733","article-title":"Human-in-the-Loop Imitation Learning using Remote Teleoperation","author":"Mandlekar","year":"2020"},{"key":"2026040113120620700_ref183","first-page":"879","article-title":"ROBOTURK: A Crowdsourcing Platform for Robotic Skill Learning through Imitation","volume-title":"Proceedings of The 2nd Conference on Robot Learning","author":"Mandlekar","year":"2018"},{"issue":"1","key":"2026040113120620700_ref184","doi-asserted-by":"crossref","first-page":"406","DOI":"10.1109\/LRA.2021.3128237","article-title":"Human-Feedback Shield Synthesis for Perceived Safety in Deep Reinforcement Learning","volume":"7","author":"Marta","year":"2021","journal-title":"IEEE Robotics and Automation Letters"},{"key":"2026040113120620700_ref185","article-title":"Programs with Common Sense","author":"McCarthy","year":"1958","journal-title":"RLE and MIT computation center Cambridge, MA, USA"},{"key":"2026040113120620700_ref186","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.1709.06166","article-title":"DropoutDAgger: A Bayesian Approach to Safe Imitation Learning","author":"Menda","year":"2017"},{"key":"2026040113120620700_ref187","doi-asserted-by":"publisher","first-page":"5041","DOI":"10.1109\/IROS40897.2019.8968287","article-title":"EnsembleDAgger: A Bayesian Approach to Safe Imitation Learning","volume-title":"2019 IEEE\/RSJ International Conference on Intelligent Robots and Systems 
(IROS)","author":"Menda","year":"2019"},{"key":"2026040113120620700_ref188","unstructured":"Mericli, C. (2011). \u201cMulti-Resolution Model Plus Correction Paradigm for Task and Skill Refinement on Autonomous Robots\u201d. PhD thesis. Citeseer."},{"key":"2026040113120620700_ref189","first-page":"194","article-title":"Improving Biped Walk Stability Using Real-Time Corrective Human Feedback","author":"Meri\u00e7li","year":"2011","journal-title":"RoboCup 2010: Robot Soccer World Cup XIV"},{"key":"2026040113120620700_ref190","doi-asserted-by":"publisher","first-page":"334","DOI":"10.1109\/ICHR.2010.5686326","article-title":"Complementary humanoid behavior shaping using corrective demonstration","volume-title":"2010 10th IEEE-RAS International Conference on Humanoid Robots","author":"Meri\u00e7li","year":"2010"},{"issue":"2","key":"2026040113120620700_ref191","doi-asserted-by":"publisher","first-page":"16","DOI":"10.5772\/10575","article-title":"Task Refinement for Autonomous Robots Using Complementary Corrective Human Feedback","volume":"8","author":"Meri\u00e7li","year":"2011","journal-title":"International Journal of Advanced Robotic Systems"},{"issue":"3","key":"2026040113120620700_ref192","doi-asserted-by":"crossref","first-page":"6052","DOI":"10.1109\/LRA.2022.3165531","article-title":"Learning to Pick at Non-Zero-Velocity From Interactive Demonstrations","volume":"7","author":"M\u00e9sz\u00e1ros","year":"2022","journal-title":"IEEE Robotics and Automation Letters"},{"issue":"4","key":"2026040113120620700_ref193","doi-asserted-by":"crossref","first-page":"911","DOI":"10.1109\/TRO.2008.926867","article-title":"Adapting robot behavior for human\u2013robot interaction","volume":"24","author":"Mitsunaga","year":"2008","journal-title":"IEEE Transactions on Robotics"},{"key":"2026040113120620700_ref194","article-title":"A Multidisciplinary survey and framework for design and evaluation of explainable AI systems. 
","volume-title":"Human-Computer Interaction","author":"Mohseni","year":"2019"},{"issue":"9","key":"2026040113120620700_ref195","doi-asserted-by":"crossref","first-page":"902","DOI":"10.1007\/s11263-018-1073-7","article-title":"Sim4cv: A photo-realistic simulator for computer vision applications","volume":"126","author":"M\u00fcller","year":"2018","journal-title":"International Journal of Computer Vision"},{"key":"2026040113120620700_ref196","first-page":"342","article-title":"Learning Multimodal Rewards from Rankings","volume":"164","author":"Myers","year":"2022","journal-title":"Proceedings of the 5th Conference on Robot Learning"},{"key":"2026040113120620700_ref197","doi-asserted-by":"crossref","DOI":"10.3389\/frobt.2021.584075","article-title":"Reinforcement Learning with Human Advice: A Survey","volume":"8","author":"Najar","year":"2021","journal-title":"Frontiers in Robotics and AI"},{"key":"2026040113120620700_ref198","doi-asserted-by":"crossref","first-page":"261","DOI":"10.1109\/ROMAN.2016.7745140","article-title":"Training a robot with evaluative feedback and unlabeled guidance signals","author":"Najar","year":"2016","journal-title":"Robot and Human Interactive Communication (RO-MAN), 2016 25th IEEE International Symposium on"},{"issue":"2","key":"2026040113120620700_ref199","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1007\/s10458-020-09459-6","article-title":"Interactively shaping robot behaviour with unlabeled human instructions","volume":"34","author":"Najar","year":"2020","journal-title":"Autonomous Agents and Multi-Agent Systems"},{"key":"2026040113120620700_ref200","doi-asserted-by":"crossref","first-page":"41","DOI":"10.7551\/mitpress\/3676.003.0003","volume-title":"Imitation in animals and artifacts","author":"Nehaniv","year":"2002"},{"key":"2026040113120620700_ref201","first-page":"278","article-title":"Policy Invariance under Reward Transformations: Theory and Application to Reward 
Shaping","volume":"99","author":"Ng","year":"1999","journal-title":"ICML"},{"key":"2026040113120620700_ref202","first-page":"663","article-title":"Algorithms for inverse reinforcement learning.","author":"Ng","year":"2000","journal-title":"ICML"},{"issue":"3","key":"2026040113120620700_ref203","first-page":"12","article-title":"Efficient interactive multiclass learning from binary feedback","volume":"4","author":"Ngo","year":"2014","journal-title":"ACM Transactions on Interactive Intelligent Systems (TiiS)"},{"key":"2026040113120620700_ref204","first-page":"241","article-title":"Natural Methods for Robot Task Learning: Instructive Demonstrations, Generalization and Practice","volume-title":"Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems. AAMAS \u201903","author":"Nicolescu","year":"2003"},{"key":"2026040113120620700_ref205","doi-asserted-by":"crossref","first-page":"306","DOI":"10.1109\/CIRA.2003.1222107","article-title":"Trajectory generation for human-friendly behavior of partner robot using fuzzy evaluating interactive genetic algorithm","volume":"1","author":"Nojima","year":"2003","journal-title":"Proceedings 2003 IEEE International Symposium on Computational Intelligence in Robotics and Automation. Computational Intelligence in Robotics and Automation for the New Millennium (Cat. No. 
03EX694)"},{"key":"2026040113120620700_ref206","doi-asserted-by":"crossref","DOI":"10.1561\/9781680834116","article-title":"An algorithmic perspective on imitation learning","volume-title":"arXiv preprint arXiv:1811.06711","author":"Osa","year":"2018"},{"key":"2026040113120620700_ref207","doi-asserted-by":"crossref","DOI":"10.15607\/RSS.2019.XV.023","article-title":"Learning Reward Functions by Integrating Human Demonstrations and Preferences","volume-title":"Robotics: Science and Systems","author":"Palan","year":"2019"},{"key":"2026040113120620700_ref208","first-page":"2616","article-title":"Probabilistic movement primitives","author":"Paraschos","year":"2013","journal-title":"Advances in neural information processing systems"},{"key":"2026040113120620700_ref209","doi-asserted-by":"crossref","first-page":"1497","DOI":"10.1109\/ICRA.2017.7989179","article-title":"Duckietown: an open, inexpensive and flexible platform for autonomy education and research","volume-title":"2017 IEEE International Conference on Robotics and Automation (ICRA)","author":"Paull","year":"2017"},{"key":"2026040113120620700_ref210","doi-asserted-by":"crossref","DOI":"10.65109\/DWCG3683","article-title":"A need for speed: Adapting agent action speed to improve task learning from non-expert humans","volume-title":"Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems","author":"Peng","year":"2016"},{"key":"2026040113120620700_ref211","doi-asserted-by":"crossref","DOI":"10.1016\/j.engappai.2022.105277","article-title":"Visually-guided motion planning for autonomous driving from interactive demonstrations","volume":"116","author":"P\u00e9rez-Dattari","year":"2022","journal-title":"Engineering Applications of Artificial Intelligence"},{"issue":"2","key":"2026040113120620700_ref212","doi-asserted-by":"crossref","first-page":"46","DOI":"10.1109\/MRA.2020.2983649","article-title":"Interactive learning of temporal features for control: Shaping policies and 
state representations from human feedback","volume":"27","author":"P\u00e9rez-Dattari","year":"2020","journal-title":"IEEE Robotics & Automation Magazine"},{"key":"2026040113120620700_ref213","first-page":"353","article-title":"Interactive learning with corrective feedback for policies based on deep neural networks","author":"P\u00e9rez-Dattari","year":"2018","journal-title":"International Symposium on Experimental Robotics"},{"key":"2026040113120620700_ref214","doi-asserted-by":"crossref","first-page":"7611","DOI":"10.1109\/ICRA.2019.8793675","article-title":"Continuous control for high-dimensional state spaces: An interactive learning approach","volume-title":"2019 International Conference on Robotics and Automation (ICRA)","author":"P\u00e9rez-Dattari","year":"2019"},{"key":"2026040113120620700_ref215","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/ICORR.2011.5975338","article-title":"Online human training of a myoelectric prosthesis controller via actor-critic reinforcement learning","volume-title":"2011 IEEE International Conference on Rehabilitation Robotics","author":"Pilarski","year":"2011"},{"key":"2026040113120620700_ref216","first-page":"11763","article-title":"Exploring data aggregation in policy learning for vision-based urban autonomous driving","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Prakash","year":"2020"},{"key":"2026040113120620700_ref217","unstructured":"Precup, D. (2000). Temporal abstraction in reinforcement learning. University of Massachusetts Amherst."},{"key":"2026040113120620700_ref218","article-title":"Clicker training for dogs. 
Waltham, MA","author":"Pryor","year":"1999"},{"key":"2026040113120620700_ref219","volume-title":"Gaussian Processes for Machine Learning","author":"Rasmussen","year":"2006"},{"key":"2026040113120620700_ref220","doi-asserted-by":"crossref","first-page":"297","DOI":"10.1146\/annurev-control-100819-063206","article-title":"Recent advances in robot learning from demonstration","volume":"3","author":"Ravichandar","year":"2020","journal-title":"Annual Review of Control, Robotics, and Autonomous Systems"},{"key":"2026040113120620700_ref221","unstructured":"Reddy, S., A. D. Dragan, and S. Levine. (2019). \u201cSQIL: Imitation Learning via Regularized Behavioral Cloning\u201d. CoRR. abs\/1905.11108. url: http:\/\/arxiv.org\/abs\/1905.11108."},{"key":"2026040113120620700_ref222","doi-asserted-by":"crossref","first-page":"1321","DOI":"10.1109\/IROS.2013.6696520","article-title":"V-REP: A versatile and scalable robot simulation framework","volume-title":"2013 IEEE\/RSJ International Conference on Intelligent Robots and Systems","author":"Rohmer","year":"2013"},{"key":"2026040113120620700_ref223","article-title":"Reinforcement and imitation learning via interactive no-regret learning","volume-title":"arXiv preprint arXiv:1406.5979","author":"Ross","year":"2014"},{"key":"2026040113120620700_ref224","first-page":"661","article-title":"Efficient reductions for imitation learning","volume-title":"Proceedings of the thirteenth international conference on artificial intelligence and statistics","author":"Ross","year":"2010"},{"key":"2026040113120620700_ref225","first-page":"627","article-title":"A reduction of imitation learning and structured prediction to no-regret online learning","volume-title":"Proceedings of the fourteenth international conference on artificial intelligence and statistics","author":"Ross","year":"2011"},{"key":"2026040113120620700_ref226","doi-asserted-by":"crossref","DOI":"10.1002\/9781118631980","volume-title":"Simulation and the Monte Carlo 
method","author":"Rubinstein","year":"2016"},{"key":"2026040113120620700_ref227","volume-title":"On-line Q-learning using connectionist systems","author":"Rummery","year":"1994"},{"key":"2026040113120620700_ref228","author":"Russell","year":"2016"},{"key":"2026040113120620700_ref229","volume-title":"Proceedings of Robotics: Science and Systems","author":"Sadigh","year":"2017"},{"issue":"3","key":"2026040113120620700_ref230","doi-asserted-by":"crossref","first-page":"210","DOI":"10.1147\/rd.33.0210","article-title":"Some Studies in Machine Learning Using the Game of Checkers","volume":"3","author":"Samuel","year":"1959","journal-title":"IBM Journal of Research and Development"},{"issue":"6","key":"2026040113120620700_ref231","doi-asserted-by":"crossref","first-page":"601","DOI":"10.1147\/rd.116.0601","article-title":"Some Studies in Machine Learning Using the Game of Checkers. II\u2013Recent Progress","volume":"11","author":"Samuel","year":"1967","journal-title":"IBM Journal of Research and Development"},{"key":"2026040113120620700_ref232","first-page":"1109","article-title":"Efficiently Guiding Imitation Learning Agents with Human Gaze","volume-title":"Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems","author":"Saran","year":"2021"},{"key":"2026040113120620700_ref233","article-title":"Dynamic Movement Primitives in Robotics: A Tutorial Survey","author":"Saveriano","year":"2021"},{"key":"2026040113120620700_ref234","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV.2019.00943","article-title":"Habitat: A Platform for Embodied AI Research","volume-title":"Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV)","author":"Savva","year":"2019"},{"key":"2026040113120620700_ref235","doi-asserted-by":"crossref","first-page":"803","DOI":"10.1109\/CDC40024.2019.9029503","article-title":"Deep Reinforcement Learning with Feedback-based Exploration","volume-title":"2019 IEEE 58th Conference on Decision and 
Control (CDC)","author":"Scholten","year":"2019"},{"key":"2026040113120620700_ref236","first-page":"1052","article-title":"Directing Policy Search with Interactively Taught Via-Points","volume-title":"Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems. AAMAS \u201916","author":"Schroecker","year":"2016"},{"issue":"9","key":"2026040113120620700_ref237","doi-asserted-by":"crossref","first-page":"1251","DOI":"10.1177\/0278364914528132","article-title":"Motion planning with sequential convex optimization and convex collision checking","volume":"33","author":"Schulman","year":"2014","journal-title":"The International Journal of Robotics Research"},{"key":"2026040113120620700_ref238","article-title":"Active learning literature survey","author":"Settles","year":"2009"},{"key":"2026040113120620700_ref239","doi-asserted-by":"crossref","first-page":"621","DOI":"10.1007\/978-3-319-67361-5_40","article-title":"Airsim: High fidelity visual and physical simulation for autonomous vehicles","volume-title":"Field and service robotics","author":"Shah","year":"2018"},{"issue":"2-3","key":"2026040113120620700_ref240","doi-asserted-by":"publisher","first-page":"217","DOI":"10.1177\/0278364919897133","article-title":"INGRESS: Interactive visual grounding of referring expressions","volume":"39","author":"Shridhar","year":"2020","journal-title":"The International Journal of Robotics Research"},{"issue":"7587","key":"2026040113120620700_ref241","doi-asserted-by":"publisher","first-page":"484","DOI":"10.1038\/nature16961","article-title":"Mastering the Game of Go with Deep Neural Networks and Tree Search","volume":"529","author":"Silver","year":"2016","journal-title":"Nature"},{"key":"2026040113120620700_ref242","first-page":"907","article-title":"S4RL: Surprisingly Simple Self-Supervision for Offline Reinforcement Learning in Robotics","volume-title":"Proceedings of the 5th Conference on Robot 
Learning","author":"Sinha","year":"2022"},{"key":"2026040113120620700_ref243","first-page":"535","article-title":"Designing biomorphs with an interactive genetic algorithm.","author":"Smith","year":"1991","journal-title":"ICGA"},{"key":"2026040113120620700_ref244","article-title":"Learning from interventions","author":"Spencer","year":"2020","journal-title":"Robotics: Science and Systems (RSS)"},{"issue":"No. 3","key":"2026040113120620700_ref245","doi-asserted-by":"crossref","first-page":"543","DOI":"10.1111\/cgf.14329","article-title":"A Survey of Human-Centered Evaluations in Human-Centered Machine Learning","volume":"40","author":"Sperrle","year":"2021","journal-title":"Computer Graphics Forum"},{"key":"2026040113120620700_ref246","doi-asserted-by":"crossref","first-page":"424","DOI":"10.1109\/ICMLA.2011.37","article-title":"Augmented reinforcement learning for interaction with non-expert humans in agent domains","volume-title":"Machine Learning and Applications and Workshops (ICMLA), 2011 10th International Conference on","author":"Sridharan","year":"2011"},{"issue":"2","key":"2026040113120620700_ref247","doi-asserted-by":"crossref","DOI":"10.18489\/sacj.v32i2.746","article-title":"A survey of benchmarks for reinforcement learning algorithms","volume":"32","author":"Stapelberg","year":"2020","journal-title":"South African Computer Journal"},{"key":"2026040113120620700_ref248","doi-asserted-by":"crossref","first-page":"60","DOI":"10.1016\/j.neunet.2015.05.005","article-title":"Many regression algorithms, one unified model: A review","volume":"69","author":"Stulp","year":"2015","journal-title":"Neural Networks"},{"key":"2026040113120620700_ref249","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/ROMAN.2011.6005223","article-title":"Effect of human guidance and state space size on interactive reinforcement learning","volume-title":"RO-MAN, 2011 
IEEE","author":"Suay","year":"2011"},{"key":"2026040113120620700_ref250","first-page":"447","article-title":"Exploration from demonstration for interactive reinforcement learning","volume-title":"Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems","author":"Subramanian","year":"2016"},{"key":"2026040113120620700_ref251","volume-title":"Introduction to statistical machine learning","author":"Sugiyama","year":"2015"},{"key":"2026040113120620700_ref252","first-page":"3309","article-title":"Deeply AggreVaTeD: Differentiable Imitation Learning for Sequential Prediction","volume-title":"Proceedings of the 34th International Conference on Machine Learning","author":"Sun","year":"2017"},{"key":"2026040113120620700_ref253","volume-title":"Reinforcement learning: An introduction","author":"Sutton","year":"2018"},{"key":"2026040113120620700_ref254","article-title":"Habitat 2.0: Training Home Assistants to Rearrange their Habitat","volume-title":"Thirty-Fifth Conference on Neural Information Processing Systems","author":"Szot","year":"2021"},{"key":"2026040113120620700_ref255","first-page":"41","article-title":"Interactive evolutionary computation","volume-title":"Proceedings of the International Conference on Soft Computing and Information\/Intelligent Systems","author":"Takagi","year":"1998"},{"issue":"9","key":"2026040113120620700_ref256","doi-asserted-by":"crossref","first-page":"1275","DOI":"10.1109\/5.949485","article-title":"Interactive evolutionary computation: Fusion of the capabilities of EC optimization and human evaluation","volume":"89","author":"Takagi","year":"2001","journal-title":"Proceedings of the IEEE"},{"key":"2026040113120620700_ref257","first-page":"483","article-title":"Dynamic Reward Shaping: Training a Robot by Voice","author":"Tenorio-Gonzalez","year":"2010","journal-title":"Advances in Artificial Intelligence \u2013 IBERAMIA 2010"},{"key":"2026040113120620700_ref258","article-title":"Adding guidance to 
interactive reinforcement learning","volume-title":"Proceedings of the Twentieth Conference on Artificial Intelligence (AAAI)","author":"Thomaz","year":"2006"},{"key":"2026040113120620700_ref259","doi-asserted-by":"publisher","first-page":"720","DOI":"10.1109\/ROMAN.2007.4415180","article-title":"Asymmetric Interpretations of Positive and Negative Human Feedback for a Social Learning Agent","author":"Thomaz","year":"2007","journal-title":"RO-MAN 2007-The 16th IEEE International Symposium on Robot and Human Interactive Communication"},{"key":"2026040113120620700_ref260","doi-asserted-by":"publisher","first-page":"82","DOI":"10.1109\/DEVLRN.2007.4354078","article-title":"Robot learning via socially guided exploration","author":"Thomaz","year":"2007"},{"key":"2026040113120620700_ref261","doi-asserted-by":"publisher","first-page":"15","DOI":"10.1145\/1514095.1514101","article-title":"Learning about objects with human teachers","volume-title":"2009 4th ACM\/IEEE International Conference on Human-Robot Interaction (HRI)","author":"Thomaz","year":"2009"},{"key":"2026040113120620700_ref262","first-page":"1000","article-title":"Reinforcement learning with human teachers: Evidence of feedback and guidance with implications for learning performance","volume":"6","author":"Thomaz","year":"2006","journal-title":"AAAI"},{"key":"2026040113120620700_ref263","first-page":"9","article-title":"Real-Time Interactive Reinforcement Learning for Robots","author":"Thomaz","year":"2005","journal-title":"AAAI 2005 Workshop on Human Comprehensible Machine Learning"},{"key":"2026040113120620700_ref264","doi-asserted-by":"publisher","first-page":"5026","DOI":"10.1109\/IROS.2012.6386109","article-title":"MuJoCo: A physics engine for model-based control","volume-title":"2012 IEEE\/RSJ International Conference on Intelligent Robots and 
Systems","author":"Todorov","year":"2012"},{"key":"2026040113120620700_ref265","doi-asserted-by":"crossref","DOI":"10.24963\/ijcai.2018\/687","article-title":"Behavioral cloning from observation","author":"Torabi","year":"2018"},{"key":"2026040113120620700_ref266","first-page":"261","article-title":"A practical comparison of three robot learning from demonstration algorithms","volume-title":"2012 7th ACM\/IEEE International Conference on Human-Robot Interaction (HRI)","author":"Toris","year":"2012"},{"key":"2026040113120620700_ref267","first-page":"193","article-title":"Two Kinds of Training Information for Evaluation Function Learning","author":"Utgoff","year":"1991","journal-title":"Computer Science Department Faculty Publication Series"},{"issue":"1","key":"2026040113120620700_ref268","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/S0968-090X(96)00025-3","article-title":"A simple procedure for the assessment of acceptance of advanced transport telematics","volume":"5","author":"Van Der Laan","year":"1997","journal-title":"Transportation Research Part C: Emerging Technologies"},{"key":"2026040113120620700_ref269","article-title":"Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards","author":"Vecerik","year":"2017","journal-title":"arXiv preprint arXiv:1707.08817"},{"key":"2026040113120620700_ref270","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/DevLrn.2012.6400849","article-title":"Reinforcement learning combined with human feedback in continuous state and action spaces","volume-title":"2012 IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL)","author":"Vien","year":"2012"},{"issue":"2","key":"2026040113120620700_ref271","doi-asserted-by":"crossref","first-page":"267","DOI":"10.1007\/s10489-012-0412-6","article-title":"Learning via human feedback in continuous state and action 
spaces","volume":"39","author":"Vien","year":"2013","journal-title":"Applied intelligence"},{"key":"2026040113120620700_ref272","doi-asserted-by":"crossref","first-page":"77","DOI":"10.3389\/frobt.2018.00077","article-title":"A user study on robot skill learning without a cost function: Optimization of dynamic movement primitives via naive user feedback","volume":"5","author":"Vollmer","year":"2018","journal-title":"Frontiers in Robotics and AI"},{"issue":"3","key":"2026040113120620700_ref273","doi-asserted-by":"crossref","first-page":"281","DOI":"10.1006\/ijhc.2001.0499","article-title":"Interactive machine learning: letting users build classifiers","volume":"55","author":"Ware","year":"2001","journal-title":"International Journal of Human-Computer Studies"},{"key":"2026040113120620700_ref274","doi-asserted-by":"crossref","DOI":"10.1609\/aaai.v32i1.11485","article-title":"Deep tamer: Interactive agent shaping in high-dimensional state spaces","volume-title":"Proceedings of the AAAI Conference on Artificial Intelligence","author":"Warnell","year":"2018"},{"issue":"3","key":"2026040113120620700_ref275","first-page":"279","article-title":"Q-learning","volume":"8","author":"Watkins","year":"1992","journal-title":"Machine learning"},{"key":"2026040113120620700_ref276","unstructured":"Watkins, C. J. C. H. (1989). 
\u201cLearning from delayed rewards\u201d."},{"issue":"8","key":"2026040113120620700_ref277","doi-asserted-by":"crossref","first-page":"29","DOI":"10.1109\/MC.2020.2996416","article-title":"Interactive Artificial Intelligence: Designing for the \"Two Black Boxes\" Problem","volume":"53","author":"Wenskovitch","year":"2020","journal-title":"Computer"},{"key":"2026040113120620700_ref278","first-page":"607","article-title":"A Complexity Analysis of Cooperative Mechanisms in Reinforcement Learning.","author":"Whitehead","year":"1991","journal-title":"AAAI"},{"key":"2026040113120620700_ref279","first-page":"353","article-title":"Learning Reward Functions from Scale Feedback","volume-title":"Proceedings of the 5th Conference on Robot Learning","author":"Wilde","year":"2022"},{"issue":"6","key":"2026040113120620700_ref280","doi-asserted-by":"publisher","first-page":"651","DOI":"10.1177\/0278364920910802","article-title":"Improving user specifications for robot behavior through active preference learning: Framework and evaluation","volume":"39","author":"Wilde","year":"2020","journal-title":"The International Journal of Robotics Research"},{"key":"2026040113120620700_ref281","doi-asserted-by":"publisher","first-page":"4430","DOI":"10.1109\/ICRA.2018.8460937","article-title":"Learning to Parse Natural Language to Grounded Reward Functions with Weak Supervision","volume-title":"2018 IEEE International Conference on Robotics and Automation (ICRA)","author":"Williams","year":"2018"},{"key":"2026040113120620700_ref282","first-page":"1133","article-title":"A Bayesian Approach for Policy Learning from Trajectory Preference Queries","volume-title":"Proceedings of the 25th International Conference on Neural Information Processing Systems-Volume 1. 
NIPS\u201912","author":"Wilson","year":"2012"},{"key":"2026040113120620700_ref283","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.1903.05216","article-title":"Learning Gaussian Policies from Corrective Human Feedback","author":"Wout","year":"2019"},{"issue":"1","key":"2026040113120620700_ref284","doi-asserted-by":"publisher","first-page":"56","DOI":"10.5898\/JHRI.2.1.Wrede","article-title":"A User Study on Kinesthetic Teaching of Redundant Robots in Task and Configuration Space","volume":"2","author":"Wrede","year":"2013","journal-title":"J. Hum.-Robot Interact."},{"key":"2026040113120620700_ref285","article-title":"A Survey of Human-in-the-loop for Machine Learning","author":"Wu","year":"2021"},{"key":"2026040113120620700_ref286","doi-asserted-by":"crossref","first-page":"2089","DOI":"10.1109\/IROS.2016.7759328","article-title":"Watch this: Scalable cost-function learning for path planning in urban environments","volume-title":"2016 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS)","author":"Wulfmeier","year":"2016"},{"key":"2026040113120620700_ref287","first-page":"1512","article-title":"FRESH: Interactive Reward Shaping in High Dimensional State Spaces Using Human Feedback","volume-title":"Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems. 
AAMAS \u201920","author":"Xiao","year":"2020"},{"key":"2026040113120620700_ref288","first-page":"1","article-title":"Accelerating human-in-the-loop machine learning: Challenges and opportunities","author":"Xin","year":"2018","journal-title":"Proceedings of the second workshop on data management for end-to-end machine learning"},{"key":"2026040113120620700_ref289","doi-asserted-by":"publisher","first-page":"1805","DOI":"10.1109\/IROS40897.2019.8968278","article-title":"Learning Actions from Human Demonstration Video for Robotic Manipulation","volume-title":"2019 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS)","author":"Yang","year":"2019"},{"issue":"1","key":"2026040113120620700_ref290","doi-asserted-by":"publisher","first-page":"41","DOI":"10.1109\/TSMC.2013.2291714","article-title":"A Gesture Learning Interface for Simulated Robot Path Shaping With a Human Teacher","volume":"44","author":"Yanik","year":"2014","journal-title":"IEEE Transactions on Human-Machine Systems"},{"key":"2026040113120620700_ref291","article-title":"Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning","volume-title":"Conference on Robot Learning (CoRL)","author":"Yu","year":"2019"},{"key":"2026040113120620700_ref292","doi-asserted-by":"crossref","first-page":"243","DOI":"10.1613\/jair.1.11345","article-title":"Human-in-the-loop artificial intelligence","volume":"64","author":"Zanzotto","year":"2019","journal-title":"Journal of Artificial Intelligence Research"},{"key":"2026040113120620700_ref293","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.1605.06450","article-title":"Query-Efficient Imitation Learning for End-to-End Autonomous Driving","author":"Zhang","year":"2016"},{"key":"2026040113120620700_ref294","doi-asserted-by":"publisher","first-page":"24258","DOI":"10.1109\/ACCESS.2020.2970433","article-title":"Deep Interactive Reinforcement Learning for Path Following of Autonomous Underwater 
Vehicle","volume":"8","author":"Zhang","year":"2020","journal-title":"IEEE Access"},{"key":"2026040113120620700_ref295","doi-asserted-by":"crossref","first-page":"36","DOI":"10.1007\/978-3-030-32813-9_5","volume-title":"Benchmarking, Measuring, and Optimizing","author":"Zhang","year":"2019"},{"key":"2026040113120620700_ref296","doi-asserted-by":"crossref","DOI":"10.24963\/ijcai.2019\/884","article-title":"Leveraging human guidance for deep reinforcement learning tasks","author":"Zhang","year":"2019"},{"key":"2026040113120620700_ref297","first-page":"1","article-title":"A review of inverse reinforcement learning theory and recent advances","author":"Zhifei","year":"2012","journal-title":"Evolutionary Computation (CEC), 2012 IEEE Congress on"},{"key":"2026040113120620700_ref298","article-title":"robosuite: A Modular Simulation Framework and Benchmark for Robot Learning","author":"Zhu","year":"2020"}],"container-title":["Foundations and Trends\u00ae in Robotics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.emerald.com\/ftrob\/article-pdf\/10\/1-2\/1\/11046403\/2300000072en.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/www.emerald.com\/ftrob\/article-pdf\/10\/1-2\/1\/11046403\/2300000072en.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T17:12:55Z","timestamp":1775063575000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.emerald.com\/ftrob\/article\/10\/1-2\/1\/1328578\/Interactive-Imitation-Learning-in-Robotics-A"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,11,22]]},"references-count":298,"journal-issue":{"issue":"1-2","published-print":{"date-parts":[[2022,11,22]]}},"URL":"https:\/\/doi.org\/10.1561\/2300000072","relation":{},"ISSN":["1935-8253","1935-8261"],
"issn-type":[{"value":"1935-8253","type":"print"},{"value":"1935-8261","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,11,22]]}}}