Robotic target tracking with approximation space-based feedback during reinforcement learning

Abstract

This paper presents a method of target tracking for a robotic vision system that employs reinforcement learning with feedback based on average rough coverage performance values. The application is a line-crawling inspection robot (ALiCE II, the second revision of the Automated Line Crawling Equipment) designed to automate the inspection of hydroelectric transmission lines and related equipment. The problem considered in this paper is how to train the vision system to track targets of interest and acquire useful images for further analysis. To train the system, two versions of Watkins' Q-learning were implemented: the classical single-step version and a modified variant using an approximation space-based form of what we term rough feedback. The robot is briefly described along with experimental results for the two forms of the Q-learning control algorithm. The contribution of this article is an introduction to a modified version of Q-learning control with rough feedback to monitor and adjust the learning rate during target tracking. © Springer-Verlag Berlin Heidelberg 2007.
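The update the abstract refers to is Watkins' single-step rule, Q(s,a) <- Q(s,a) + alpha * [ r + gamma * max_a' Q(s',a') - Q(s,a) ]. The Python sketch below shows that rule together with a hypothetical hook in which an average rough coverage value in [0, 1] modulates the learning rate alpha, as the abstract describes; the function names, the 0.5 scaling factor, and the adjustment rule itself are illustrative assumptions rather than the authors' formulation.

import numpy as np

def q_update(Q, s, a, r, s_next, alpha, gamma):
    # Classical Watkins single-step Q-learning update on a tabular
    # Q[state, action] array.
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

def adjusted_alpha(base_alpha, avg_rough_coverage):
    # Hypothetical learning-rate adjustment driven by average rough coverage
    # feedback from an approximation space; the actual rule is defined in the
    # paper. Assumption: behaviour that is well covered by the reference
    # standard (coverage near 1) warrants smaller steps, poorly covered
    # behaviour warrants larger ones.
    return base_alpha * (1.0 - 0.5 * avg_rough_coverage)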

Citation (APA)

Lockery, D., & Peters, J. F. (2007). Robotic target tracking with approximation space-based feedback during reinforcement learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4482 LNAI, pp. 483–490). Springer Verlag. https://doi.org/10.1007/978-3-540-72530-5_58
