Adaptive dynamic programming for model-free tracking of trajectories with time-varying parameters

Abstract

Recently proposed adaptive dynamic programming (ADP) tracking controllers assume that the reference trajectory follows time-invariant exo-system dynamics, an assumption that does not hold for many applications. To overcome this limitation, we propose a new Q-function that explicitly incorporates a parametrized approximation of the reference trajectory. This allows a general class of trajectories to be tracked by means of ADP. Once our Q-function has been learned, the associated controller handles time-varying reference trajectories without further training and independently of the exo-system dynamics. After presenting this general model-free off-policy tracking method, we analyze the important special case of linear quadratic tracking. An example demonstrates that the new method successfully learns the optimal tracking controller and outperforms existing approaches in terms of tracking error and cost.
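To make the structure described above concrete, the following is a minimal sketch (Python/NumPy) of the linear quadratic tracking special case. It is not the authors' exact algorithm: the plant, the cost weights, the reference parametrization (a short preview window of upcoming reference samples), and the off-policy least-squares policy iteration are illustrative assumptions. What it does reflect from the abstract is the key idea of a Q-function defined over an augmented vector [x; p; u], where p is the reference parameter vector, learned model-free from exploratory data, whose greedy policy is a combined feedback/feedforward law that takes the current reference parameters as an input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated plant (used only to generate data; the learner never sees A, B).
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
n, m = 2, 1

# Tracking cost: q_track * (x1 - r)^2 + u^T R u, discounted by gamma.
q_track, R, gamma = 10.0, np.array([[0.1]]), 0.95

# Assumed reference parametrization: a preview window of the next N_prev samples.
N_prev = 3
p_dim = N_prev
z_dim = n + p_dim + m          # augmented vector z = [x; p; u]

# Random smooth training reference; its parameters excite the Q-function inputs.
w = rng.standard_normal(4000)
r_train = np.convolve(w, np.ones(5) / 5, mode="same")

def ref_params(k):             # preview window [r_k, ..., r_{k+N_prev-1}]
    return r_train[k:k + N_prev]

def quad_features(z):          # monomials z_i * z_j (i <= j) for Q(z) = z^T H z
    return np.outer(z, z)[np.triu_indices(len(z))]

def greedy_gain(H):            # u = -H_uu^{-1} H_{u,(x,p)} [x; p]
    Huu = H[n + p_dim:, n + p_dim:]
    Huxp = H[n + p_dim:, :n + p_dim]
    return -np.linalg.solve(Huu, Huxp)

# Collect off-policy data with an exploratory (behavior) input.
T = 3000
X, U = np.zeros((T + 1, n)), np.zeros((T, m))
x = np.zeros(n)
for k in range(T):
    u = 0.5 * rng.standard_normal(m)
    X[k], U[k] = x, u
    x = A @ x + B @ u
X[T] = x

# Off-policy least-squares policy iteration on the augmented, approximate Q-function.
K = np.zeros((m, n + p_dim))   # combined feedback / preview-feedforward gain
for _ in range(20):
    Phi, y = [], []
    for k in range(T - 1):
        x_k, u_k, x_next = X[k], U[k], X[k + 1]
        p_k, p_next = ref_params(k), ref_params(k + 1)
        cost = q_track * (x_k[0] - r_train[k]) ** 2 + float(u_k @ R @ u_k)
        z_k = np.concatenate([x_k, p_k, u_k])
        u_pi = K @ np.concatenate([x_next, p_next])   # target-policy action
        z_next = np.concatenate([x_next, p_next, u_pi])
        Phi.append(quad_features(z_k) - gamma * quad_features(z_next))
        y.append(cost)
    theta, *_ = np.linalg.lstsq(np.asarray(Phi), np.asarray(y), rcond=None)
    Htri = np.zeros((z_dim, z_dim))
    Htri[np.triu_indices(z_dim)] = theta
    H = 0.5 * (Htri + Htri.T)  # symmetric kernel of the quadratic Q-function
    K_new = greedy_gain(H)
    if np.linalg.norm(K_new - K) < 1e-6:
        break
    K = K_new

print("learned feedback/preview gain K:\n", K)
```

Because the reference parameters enter the learned Q-function as an input rather than being baked into fixed exo-system dynamics, tracking a new reference at run time only requires supplying its current parameter vector; no retraining is needed, which mirrors the claim made in the abstract.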

Citation (APA)

Köpf, F., Ramsteiner, S., Puccetti, L., Flad, M., & Hohmann, S. (2020). Adaptive dynamic programming for model-free tracking of trajectories with time-varying parameters. International Journal of Adaptive Control and Signal Processing, 34(7), 839–856. https://doi.org/10.1002/acs.3106
