This paper investigates the use of reinforcement learning to solve the path-tracking problem for car-like robots. The reinforcement learner uses a case-based function approximator to extend the standard reinforcement learning paradigm to continuous states. The learned controller performs comparably to the best traditional control functions, both in simulation and in practical driving.
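As a rough illustration of the kind of case-based function approximation described above, the sketch below stores visited states as cases and estimates Q-values for new continuous states by distance-weighted averaging over the nearest stored cases. The class, parameters, and update rule are hypothetical simplifications, not the authors' exact design.

```python
# Illustrative sketch only: a simple case-based (nearest-neighbour) Q-value
# approximator for continuous states. All names and thresholds are assumed,
# not taken from the paper.
import numpy as np

class CaseBasedQ:
    def __init__(self, n_actions, k=3, add_threshold=0.2):
        self.n_actions = n_actions            # size of the discrete action set
        self.k = k                            # neighbours used for interpolation
        self.add_threshold = add_threshold    # distance beyond which a new case is stored
        self.cases = []                       # list of (state_vector, q_values) pairs

    def _neighbours(self, state):
        dists = [np.linalg.norm(state - s) for s, _ in self.cases]
        order = np.argsort(dists)[: self.k]
        return [(dists[i], self.cases[i]) for i in order]

    def q_values(self, state):
        # Distance-weighted average of the Q-values of the k nearest cases.
        if not self.cases:
            return np.zeros(self.n_actions)
        neigh = self._neighbours(np.asarray(state, dtype=float))
        weights = np.array([1.0 / (d + 1e-6) for d, _ in neigh])
        qs = np.array([q for _, (_, q) in neigh])
        return weights @ qs / weights.sum()

    def update(self, state, action_idx, td_target, lr=0.5):
        # Add a new case if the state is far from all stored cases,
        # then move the nearest case's Q-value toward the TD target.
        state = np.asarray(state, dtype=float)
        if (not self.cases or
                min(np.linalg.norm(state - s) for s, _ in self.cases) > self.add_threshold):
            self.cases.append((state, np.zeros(self.n_actions)))
        nearest = min(self.cases, key=lambda c: np.linalg.norm(state - c[0]))
        nearest[1][action_idx] += lr * (td_target - nearest[1][action_idx])
```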
Baltes, J., & Lin, Y. (2000). Path tracking control of non-holonomic car-like robot with reinforcement learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1856, pp. 162–173). Springer Verlag. https://doi.org/10.1007/3-540-45327-x_12