Learning replanning policies with direct policy search

Citations: 2 · Mendeley readers: 16

Abstract

Direct policy search has been successful at learning challenging real-world robotic motor skills by acquiring open-loop movement primitives with high sample efficiency. These primitives can be generalized to different contexts, i.e., varying initial configurations and goals. However, current state-of-the-art contextual policy search algorithms cannot adapt to changing, noisy context measurements, which are common characteristics of real-world robotic tasks. Planning a trajectory ahead of time based on an inaccurate context that may change during the motion often results in poor accuracy, especially in highly dynamic tasks. To adapt to updated contexts, it is sensible to learn trajectory replanning strategies. We propose a framework for learning trajectory replanning policies via contextual policy search and demonstrate that the learned policies are safe for the robot, can be learned efficiently, and outperform non-replanning policies on problems with partially observable or perturbed contexts.
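The abstract contrasts open-loop execution of a contextual policy with replanning from updated context measurements. The following is a minimal, hypothetical sketch of that idea: a linear-Gaussian contextual policy maps a context vector to movement-primitive parameters, and a rollout loop re-queries the policy with the freshest (noisy) context at fixed intervals instead of committing to a single plan. All class and function names, the linear-Gaussian form, and the replanning interval are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

class ContextualPolicy:
    """Hypothetical linear-Gaussian contextual policy: theta ~ N(K c + b, Sigma).

    Maps a context vector c (e.g. a goal position) to movement-primitive
    parameters theta. K, b, Sigma would normally be learned with a
    contextual policy search method; here they are random placeholders.
    """
    def __init__(self, context_dim, param_dim):
        self.K = rng.normal(scale=0.1, size=(param_dim, context_dim))
        self.b = np.zeros(param_dim)
        self.Sigma = 0.01 * np.eye(param_dim)

    def mean(self, context):
        return self.K @ context + self.b

    def sample(self, context):
        return rng.multivariate_normal(self.mean(context), self.Sigma)

def rollout_with_replanning(policy, observe_context, horizon, replan_every):
    """Re-query the policy with the latest noisy context measurement every
    `replan_every` steps, rather than executing one open-loop plan."""
    plans = []
    for t in range(horizon):
        if t % replan_every == 0:
            c_t = observe_context(t)      # noisy, possibly changing context
            theta = policy.mean(c_t)      # replanned primitive parameters
            plans.append((t, theta.copy()))
    return plans

# Illustrative usage: a 2-D goal context that drifts slowly over time
# and is measured with additive Gaussian noise.
policy = ContextualPolicy(context_dim=2, param_dim=4)
true_goal = lambda t: np.array([1.0, 0.5]) + 0.001 * t
observe = lambda t: true_goal(t) + rng.normal(scale=0.05, size=2)
plans = rollout_with_replanning(policy, observe, horizon=100, replan_every=20)
print(len(plans))  # one replanned parameter vector per replanning step
```

A non-replanning baseline would call `observe_context` once at t = 0 and hold the resulting parameters fixed, which is exactly the failure mode the abstract describes under context drift.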

Citation (APA)

Brandherm, F., Peters, J., Neumann, G., & Akrour, R. (2019). Learning replanning policies with direct policy search. IEEE Robotics and Automation Letters, 4(2), 2196–2203. https://doi.org/10.1109/LRA.2019.2901656
