Neural modeling of hose dynamics to speedup reinforcement learning experiments


Abstract

Two main practical problems arise when applying Reinforcement Learning (RL) to the autonomous learning of control for Linked Multi-Component Robotic Systems (L-MCRS): time and space consumption, due to the convergence conditions of the RL algorithm applied (Q-learning) and the complexity of the system model. Approximating the model's response allows RL experiments to be carried out much faster. We use a multivariate regression approximation model based on Artificial Neural Networks (ANN), which achieves time and space savings of 90% and 27%, respectively, compared with the conventional Geometrically Exact Dynamic Splines (GEDS) model.
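The idea of replacing an expensive dynamics simulator with an ANN regression surrogate can be sketched as follows. This is an illustrative toy, not the authors' implementation: the `slow_model` function is a hypothetical stand-in for the GEDS hose simulation, the state is reduced to one dimension, and the network is a small hand-rolled MLP trained by gradient descent.

```python
# Sketch: learn a fast ANN surrogate of an expensive dynamics model,
# then query the surrogate during RL experiments instead of the simulator.
# All names and the 1-D dynamics are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def slow_model(state, action):
    # Hypothetical stand-in for the costly GEDS simulation:
    # a damped response to the applied action.
    return 0.9 * state + 0.1 * np.tanh(action)

# Collect training transitions (state, action) -> next state.
X = rng.uniform(-1.0, 1.0, size=(2000, 2))        # columns: state, action
y = slow_model(X[:, 0], X[:, 1]).reshape(-1, 1)

# Two-layer MLP regressor trained with full-batch gradient descent on MSE.
H = 32
W1 = rng.normal(0.0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.1
for epoch in range(3000):
    h = np.tanh(X @ W1 + b1)                      # hidden activations
    pred = h @ W2 + b2
    err = pred - y
    # Backpropagation of the mean-squared-error gradient.
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def fast_model(state, action):
    """Cheap surrogate: one forward pass instead of a full simulation."""
    h = np.tanh(np.array([state, action]) @ W1 + b1)
    return float(h @ W2 + b2)
```

During Q-learning, each call to the simulator would be replaced by `fast_model`, so the cost of an experiment is dominated by a single matrix-vector pass per transition rather than by integrating the spline dynamics.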

Cite


Lopez-Guede, J. M., & Graña, M. (2015). Neural modeling of hose dynamics to speedup reinforcement learning experiments. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9108, pp. 311–319). Springer Verlag. https://doi.org/10.1007/978-3-319-18833-1_33
