Most work on navigation for mobile robots does not take existing solutions to similar problems into account when learning a policy for a new problem, and consequently solves each navigation problem from scratch. In this article we investigate a knowledge transfer technique that enables the reuse of a previously known policy from one or more related source tasks in a new task. We represent the learned knowledge as a stochastic abstract policy, which can be induced from a training set of navigation examples: state-action sequences executed successfully by a robot to achieve a specific goal in a given environment. We propose both a probabilistic and a nondeterministic abstract policy, in order to preserve the occurrence of all actions identified in the inductive process. Experiments carried out attest to the effectiveness and efficiency of our proposal. © 2011 Springer-Verlag.
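To illustrate the induction step described above, the following is a minimal sketch of how a stochastic abstract policy and a nondeterministic abstract policy might be induced from demonstration trajectories. The abstract state names, action names, and function names are hypothetical; this is not the authors' implementation, only a simple frequency-counting interpretation of the idea.

```python
from collections import defaultdict, Counter

def induce_stochastic_policy(trajectories):
    """Count action frequencies per abstract state across the training
    trajectories and normalize into a probability distribution."""
    counts = defaultdict(Counter)
    for trajectory in trajectories:
        for state, action in trajectory:
            counts[state][action] += 1
    policy = {}
    for state, action_counts in counts.items():
        total = sum(action_counts.values())
        policy[state] = {a: c / total for a, c in action_counts.items()}
    return policy

def induce_nondeterministic_policy(trajectories):
    """Keep the set of every action observed in each abstract state,
    preserving all actions identified during induction."""
    policy = defaultdict(set)
    for trajectory in trajectories:
        for state, action in trajectory:
            policy[state].add(action)
    return dict(policy)

# Hypothetical demonstrations: abstract states describe the robot's
# situation, actions are navigation primitives.
demos = [
    [("near_wall", "turn_left"), ("corridor", "go_forward")],
    [("near_wall", "turn_left"), ("corridor", "go_forward")],
    [("near_wall", "go_forward"), ("corridor", "go_forward")],
]

stochastic = induce_stochastic_policy(demos)
nondet = induce_nondeterministic_policy(demos)
```

Here the stochastic policy assigns "turn_left" probability 2/3 in the abstract state "near_wall", while the nondeterministic policy keeps both observed actions for that state, so no action seen in the examples is discarded.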
CITATION STYLE
Matos, T., Bergamo, Y. P., Da Silva, V. F., & Costa, A. H. R. (2011). Stochastic abstract policies for knowledge transfer in robotic navigation tasks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7094 LNAI, pp. 454–465). https://doi.org/10.1007/978-3-642-25324-9_39