One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning

79 Citations · 541 Mendeley Readers

Abstract

Humans and animals are capable of learning a new behavior by observing others perform the skill just once. We consider the problem of allowing a robot to do the same – learning from a video of a human, even when there is domain shift in the perspective, environment, and embodiment between the robot and the observed human. Prior approaches to this problem have hand-specified how human and robot actions correspond and often relied on explicit human pose detection systems. In this work, we present an approach for one-shot learning from a video of a human by using human and robot demonstration data from a variety of previous tasks to build up prior knowledge through meta-learning. Then, combining this prior knowledge and only a single video demonstration from a human, the robot can perform the task that the human demonstrated. We show experiments on both a PR2 arm and a Sawyer arm, demonstrating that after meta-learning, the robot can learn to place, push, and pick-and-place new objects using just one video of a human performing the manipulation.
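
To make the described approach concrete, below is a minimal, hedged sketch of the domain-adaptive meta-learning idea from the abstract: an inner loop that adapts the policy from the human video using a *learned* adaptation objective (so no human action labels or pose detection are required), and an outer loop that meta-trains with behavioral cloning on a paired robot demonstration. This is not the authors' released code; the network sizes, the `adapt_loss_net` module, and the learning rates are all hypothetical stand-ins (the actual method uses vision-based convolutional policies and a learned temporal loss over video frames). The sketch assumes PyTorch 2.x for `torch.func.functional_call`.

```python
import torch
import torch.nn as nn
from torch.func import functional_call  # PyTorch >= 2.0

# Hypothetical dimensions; the paper uses image observations and a CNN policy.
OBS_DIM, ACT_DIM, HIDDEN = 16, 4, 64

policy = nn.Sequential(
    nn.Linear(OBS_DIM, HIDDEN), nn.ReLU(),
    nn.Linear(HIDDEN, ACT_DIM),
)

# Learned adaptation loss: scores policy outputs on the human video.
# Stand-in for the paper's learned loss network; no action labels needed.
adapt_loss_net = nn.Sequential(nn.Linear(ACT_DIM, 1))

inner_lr = 0.01  # assumed inner-loop step size
meta_opt = torch.optim.Adam(
    list(policy.parameters()) + list(adapt_loss_net.parameters()), lr=1e-3
)

def adapted_params(human_obs):
    """Inner loop: one gradient step on the learned loss over the human video."""
    params = dict(policy.named_parameters())
    preds = functional_call(policy, params, (human_obs,))
    inner_loss = adapt_loss_net(preds).mean()
    # create_graph=True keeps the graph so the outer loss can
    # backprop through the adaptation step (second-order meta-learning).
    grads = torch.autograd.grad(inner_loss, list(params.values()),
                                create_graph=True)
    return {k: v - inner_lr * g
            for (k, v), g in zip(params.items(), grads)}

# --- One meta-training step on a (human video, robot demo) task pair. ---
human_obs = torch.randn(50, OBS_DIM)   # frames from the human video (dummy data)
robot_obs = torch.randn(40, OBS_DIM)   # robot demo observations (dummy data)
robot_act = torch.randn(40, ACT_DIM)   # robot demo actions (dummy data)

theta_prime = adapted_params(human_obs)
# Outer loop: behavioral cloning on the robot demo with adapted parameters.
bc_loss = ((functional_call(policy, theta_prime, (robot_obs,)) -
            robot_act) ** 2).mean()
meta_opt.zero_grad()
bc_loss.backward()  # updates both the policy prior and the learned loss
meta_opt.step()
```

The point the sketch is meant to illustrate is the one in the abstract: because the inner-loop objective is itself learned during meta-training, adaptation at test time needs only raw video of the human, with no hand-specified correspondence between human and robot actions. At deployment, one would run `adapted_params` on the new human video and execute the policy with the resulting parameters.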

Citation (APA)

Yu, T., Finn, C., Xie, A., Dasari, S., Zhang, T., Abbeel, P., & Levine, S. (2018). One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning. In Robotics: Science and Systems. MIT Press Journals. https://doi.org/10.15607/RSS.2018.XIV.002
