Lifelong Robot Learning

  • Thrun, S.
  • Mitchell, T. M.

Abstract

Learning provides a useful tool for the automatic design of autonomous robots. Recent research on learning robot control has predominantly focussed on learning single tasks that were studied in isolation. If robots encounter a multitude of control learning tasks over their entire lifetime, however, there is an opportunity to transfer knowledge between them. In order to do so, robots may learn the invariants of the individual tasks and environments. This task-independent knowledge can be employed to bias generalization when learning control, which reduces the need for real-world experimentation. We argue that knowledge transfer is essential if robots are to learn control with moderate learning times in complex scenarios. Two approaches to lifelong robot learning which both capture invariant knowledge about the robot and its environments are presented. Both approaches have been evaluated using a HERO-2000 mobile robot. Learning tasks included navigation in unknown indoor environments and a simple find-and-fetch task.
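
As a purely illustrative aid (not taken from the paper), the short Python sketch below shows one way to read the idea in the abstract: a model of the robot's action dynamics, learned once from experience pooled across earlier tasks, is treated as task-independent knowledge and reused to bias action selection on a new task, so fewer real-world trials are needed. All names, the linear least-squares model, and the greedy planner are assumptions made for illustration; they are not the two approaches evaluated by the authors.

    # Illustrative sketch only -- not the authors' algorithms. Knowledge that is
    # invariant across tasks (here, a learned model of the robot's action
    # dynamics) is reused to bias learning on a new task, reducing the need for
    # real-world experimentation. All names below are hypothetical.

    import numpy as np

    class ActionModel:
        """Task-independent knowledge: predicts the next state from (state, action).
        Fit once from experience gathered across earlier tasks; reused for new ones."""

        def __init__(self):
            self.W = None

        def fit(self, states, actions, next_states):
            X = np.hstack([states, actions])        # features: state-action pairs
            # least-squares linear dynamics model as a stand-in for a learned network
            self.W, *_ = np.linalg.lstsq(X, next_states, rcond=None)

        def predict(self, state, action):
            return np.hstack([state, action]) @ self.W

    def plan_greedily(model, state, candidate_actions, reward_fn):
        """Bias control learning on a new task: evaluate candidate actions in the
        learned model instead of trying each one on the real robot."""
        scores = [reward_fn(model.predict(state, a)) for a in candidate_actions]
        return candidate_actions[int(np.argmax(scores))]

    # --- usage sketch ---------------------------------------------------------
    rng = np.random.default_rng(0)
    S  = rng.normal(size=(200, 3))                  # states from earlier tasks
    A  = rng.normal(size=(200, 2))                  # actions from earlier tasks
    S2 = S + 0.1 * np.pad(A, ((0, 0), (0, 1)))      # placeholder "next states"

    model = ActionModel()
    model.fit(S, A, S2)                             # invariant knowledge, learned once

    # a new task defines only its own reward; the dynamics model carries over
    goal = np.array([1.0, 0.0, 0.0])
    reward = lambda s: -np.linalg.norm(s - goal)
    action = plan_greedily(model, S[0],
                           [rng.normal(size=2) for _ in range(10)], reward)

The point of the sketch is the division of labour: the dynamics model is fit from data across many tasks and never changes when a new task arrives, while each new task contributes only its own reward function. That mirrors the abstract's claim that task-independent knowledge can bias generalization when learning control for a new task.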

Citation (APA)

Thrun, S., & Mitchell, T. M. (1995). Lifelong Robot Learning. In The Biology and Technology of Intelligent Autonomous Agents (pp. 165–196). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-79629-6_7
