{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T02:43:42Z","timestamp":1773801822434,"version":"3.50.1"},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"13","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAAI"],"abstract":"<jats:p>Distilling knowledge from human demonstrations is a promising way for robots to learn and act. Existing methods, which often rely on coarsely-aligned video pairs, are typically constrained to learning global or task-level features. As a result, they tend to neglect the fine-grained frame-level dynamics required for complex manipulation and generalization to novel tasks. We posit that this limitation stems from a vicious circle of inadequate datasets and the methods they inspire. To break this cycle, we propose a paradigm shift that treats fine-grained human-robot alignment as a conditional video generation problem. To this end, we first introduce H&amp;R, a novel third-person dataset containing 2,600 episodes of precisely synchronized human and robot motions, collected using a VR teleoperation system. We then present Human2Robot, a framework designed to leverage this data. Human2Robot employs a Video Prediction Model to learn a rich and implicit representation of robot dynamics by generating robot videos from human input, which in turn guides a decoupled action decoder. Our real-world experiments demonstrate that this approach not only achieves high performance on seen tasks but also exhibits significant one-shot generalization to novel positions, objects, instances, and even new task categories.<\/jats:p>","DOI":"10.1609\/aaai.v40i13.38086","type":"journal-article","created":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T00:01:37Z","timestamp":1773792097000},"page":"11078-11086","source":"Crossref","is-referenced-by-count":0,"title":["Human2Robot: Learning Robot Actions from Paired Human-Robot Videos"],"prefix":"10.1609","volume":"40","author":[{"given":"Sicheng","family":"Xie","sequence":"first","affiliation":[]},{"given":"Haidong","family":"Cao","sequence":"additional","affiliation":[]},{"given":"Zejia","family":"Weng","sequence":"additional","affiliation":[]},{"given":"Zhen","family":"Xing","sequence":"additional","affiliation":[]},{"given":"Haoran","family":"Chen","sequence":"additional","affiliation":[]},{"given":"Shiwei","family":"Shen","sequence":"additional","affiliation":[]},{"given":"Jiaqi","family":"Leng","sequence":"additional","affiliation":[]},{"given":"Zuxuan","family":"Wu","sequence":"additional","affiliation":[]},{"given":"Yu-Gang","family":"Jiang","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2026,3,14]]},"container-title":["Proceedings of the AAAI Conference on Artificial 
Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/38086\/42048","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/38086\/42048","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T00:01:37Z","timestamp":1773792097000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/38086"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,14]]},"references-count":0,"journal-issue":{"issue":"13","published-online":{"date-parts":[[2026,3,17]]}},"URL":"https:\/\/doi.org\/10.1609\/aaai.v40i13.38086","relation":{},"ISSN":["2374-3468","2159-5399"],"issn-type":[{"value":"2374-3468","type":"electronic"},{"value":"2159-5399","type":"print"}],"subject":[],"published":{"date-parts":[[2026,3,14]]}}}