Reinforcement Learning is a commonly used technique in robotics; however, traditional algorithms are unable to handle large amounts of data coming from the robot's sensors, require long training times, are unable to re-use learned policies in similar domains, and use discrete actions. This work introduces TS-RRLCA, a two-stage method to tackle these problems. In the first stage, low-level data coming from the robot's sensors is transformed into a more natural, relational representation based on rooms, walls, corners, doors and obstacles, significantly reducing the state space. We also use Behavioural Cloning, i.e., traces provided by the user, to learn, in a few iterations, a relational policy that can be re-used in different environments. In the second stage, we use Locally Weighted Regression to transform the initial policy into a continuous-action policy. We tested our approach with a real service robot in different environments on different navigation and following tasks. Results show that the policies can be used in different domains and produce smoother, faster and shorter paths than the original policies. © 2009 Springer-Verlag Berlin Heidelberg.
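The abstract's second stage smooths a discrete-action policy into a continuous one with Locally Weighted Regression. A minimal sketch of that idea, assuming a Gaussian kernel and a simple nearest-experience scheme (the function and parameter names here are illustrative, not the paper's):

```python
import numpy as np

def lwr_continuous_action(state, stored_states, stored_actions, bandwidth=1.0):
    """Blend the discrete actions of nearby stored states into one
    continuous action, weighting each by a Gaussian kernel on distance."""
    d2 = np.sum((stored_states - state) ** 2, axis=1)   # squared distances
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))            # kernel weights
    return float(w @ stored_actions / w.sum())          # weighted average action

# Example: three states with discrete steering commands 0.0, 0.5 and 1.0.
stored_states = np.array([[0.0], [1.0], [2.0]])
stored_actions = np.array([0.0, 0.5, 1.0])
a = lwr_continuous_action(np.array([1.0]), stored_states, stored_actions)
```

Queries between stored states yield intermediate commands, which is how a coarse discrete policy can produce the smoother trajectories the abstract reports.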
CITATION STYLE
Zaragoza, J. H., & Morales, E. F. (2009). A two-stage relational reinforcement learning with continuous actions for real service robots. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5845 LNAI, pp. 337–348). https://doi.org/10.1007/978-3-642-05258-3_30