Learning actions through imitation and exploration: Towards humanoid robots that learn from humans


Abstract

A prerequisite for achieving brain-like intelligence is the ability to rapidly learn new behaviors and actions. A fundamental mechanism for rapid learning in humans is imitation: children routinely learn new skills (e.g., opening a door or tying a shoelace) by imitating their parents; adults continue to learn by imitating skilled instructors (e.g., in tennis). In this chapter, we propose a probabilistic framework for imitation learning in robots that is inspired by how humans learn from imitation and exploration. Rather than relying on complex (and often brittle) physics-based models, the robot learns a dynamic Bayesian network that captures its dynamics directly in terms of sensor measurements and actions during an imitation-guided exploration phase. After learning, actions are selected based on probabilistic inference in the learned Bayesian network. We present results demonstrating that a 25-degree-of-freedom humanoid robot can learn dynamically stable, full-body imitative motions simply by observing a human demonstrator. © Springer-Verlag Berlin Heidelberg 2009.
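To make the abstract's approach concrete, the sketch below illustrates the general pattern it describes, not the chapter's actual implementation: fit a simple linear-Gaussian forward model P(s_{t+1} | s_t, a_t) from imitation-guided exploration data, then select actions by inference, i.e., by maximizing the likelihood that the learned model reaches a demonstrated target state. All function names, the linear-Gaussian assumption, and the discrete candidate-action set are hypothetical simplifications.

```python
# Minimal sketch of imitation learning via a learned probabilistic
# forward model. This is an illustrative stand-in, NOT the authors'
# dynamic Bayesian network; names and modeling choices are assumptions.
import numpy as np


def fit_dynamics(S, A, S_next):
    """Fit s_{t+1} ~ [s_t, a_t, 1] @ W + Gaussian noise by least squares.

    S, S_next: (T, ds) sensor states; A: (T, da) actions.
    Returns the weight matrix W and residual covariance Sigma.
    """
    X = np.hstack([S, A, np.ones((S.shape[0], 1))])       # regressors
    W, *_ = np.linalg.lstsq(X, S_next, rcond=None)        # (ds+da+1, ds)
    resid = S_next - X @ W
    Sigma = np.cov(resid.T) + 1e-6 * np.eye(S.shape[1])   # noise model
    return W, Sigma


def select_action(W, Sigma, s_t, s_goal, candidates):
    """Pick the candidate action most likely to reach the demonstrated
    state s_goal under the learned Gaussian forward model."""
    inv = np.linalg.inv(Sigma)

    def log_lik(a):
        x = np.concatenate([s_t, a, [1.0]])
        mu = x @ W                        # predicted next state
        d = s_goal - mu
        return -0.5 * d @ inv @ d         # Gaussian log-likelihood (up to a constant)

    return max(candidates, key=log_lik)
```

In the chapter itself, inference is performed in a full dynamic Bayesian network and must also satisfy dynamic-stability constraints for full-body humanoid motion; the discrete candidate search above merely stands in for that richer inference step.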

Citation (APA)

Grimes, D. B., & Rao, R. P. N. (2009). Learning actions through imitation and exploration: Towards humanoid robots that learn from humans. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5436, pp. 103–138). https://doi.org/10.1007/978-3-642-00616-6_7
