Learning to control a low-cost manipulator using data-efficient reinforcement learning


Abstract

In recent years, there has been substantial progress in robust manipulation in unstructured environments. The long-term goal of our work is to move away from precise but very expensive robotic systems and to develop affordable, potentially imprecise, self-adaptive manipulator systems that can interactively perform tasks such as playing with children. In this paper, we demonstrate how a low-cost off-the-shelf robotic system can learn closed-loop policies for a stacking task from scratch, in only a handful of trials. Our manipulator is inaccurate and provides no pose feedback. To learn a controller in the workspace of a Kinect-style depth camera, we use a model-based reinforcement learning technique. Our learning method is data efficient, reduces model bias, and deals with several noise sources in a principled way during long-term planning. We present a way of incorporating state-space constraints into the learning process and analyze the learning gain obtained by exploiting the sequential structure of the stacking task.
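To make the learning loop concrete, below is a minimal, hypothetical sketch of the model-based reinforcement learning cycle the abstract describes: every observed transition is used to fit a probabilistic dynamics model (here a Gaussian process, in the spirit of the authors' PILCO framework), and a simple policy is then improved through simulated rollouts in that model rather than through additional real trials. The one-dimensional plant, the single-gain policy class, the target state, and the quadratic cost are illustrative assumptions, not the paper's actual setup.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    # Illustrative 1-D plant standing in for the real manipulator; the learner
    # never evaluates this function directly, it only records sampled transitions.
    def real_step(x, u, rng):
        return 0.9 * x + 0.2 * u + 0.01 * rng.standard_normal()

    def rollout(gain, x0, steps, step_fn):
        """Run one closed-loop episode; return (state, action, next_state) tuples."""
        x, data = x0, []
        for _ in range(steps):
            u = np.clip(gain * (1.0 - x), -1.0, 1.0)  # drive the state toward 1.0
            x_next = step_fn(x, u)
            data.append((x, u, x_next))
            x = x_next
        return data

    rng = np.random.default_rng(0)
    target, horizon = 1.0, 20
    dataset, gain = [], 0.5

    # Each trial refits the dynamics model on ALL data collected so far (this
    # reuse is the source of data efficiency), then searches for a better policy
    # purely by "mental" rollouts through the learned model.
    for trial in range(5):
        dataset += rollout(gain, 0.0, horizon, lambda x, u: real_step(x, u, rng))

        X = np.array([(x, u) for x, u, _ in dataset])
        y = np.array([x_next for _, _, x_next in dataset])
        model = GaussianProcessRegressor(normalize_y=True).fit(X, y)

        def model_step(x, u):
            return float(model.predict(np.array([[x, u]]))[0])

        def simulated_cost(g):
            sim = rollout(g, 0.0, horizon, model_step)
            return sum((x_next - target) ** 2 for _, _, x_next in sim)

        candidates = np.linspace(0.1, 2.0, 20)
        gain = min(candidates, key=simulated_cost)
        print(f"trial {trial}: selected gain {gain:.2f}")

In this sketch, a grid search over a single feedback gain stands in for the gradient-based policy optimization and analytic uncertainty propagation used in PILCO-style methods; the structural point is the same, in that real interactions are spent only on gathering data, while policy evaluation happens inside the learned model.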

Cite

APA

Deisenroth, M. P., Rasmussen, C. E., & Fox, D. (2012). Learning to control a low-cost manipulator using data-efficient reinforcement learning. In Robotics: Science and Systems (Vol. 7, pp. 57–64). MIT Press Journals. https://doi.org/10.15607/rss.2011.vii.008
