Infinite Horizon Optimal Control

Abstract

In this chapter we give an introduction to nonlinear infinite horizon optimal control. The dynamic programming principle as well as several consequences of this principle are proved. One of the main results of this chapter is that the infinite horizon optimal feedback law asymptotically stabilizes the system and that the infinite horizon optimal value function is a Lyapunov function for the closed-loop system. Motivated by this property, we formulate a relaxed version of the dynamic programming principle, which makes it possible to prove stability and suboptimality results for nonoptimal feedback laws without using the optimal value function. A practical version of this principle is also provided. These results will be central in the following chapters for the stability and performance analysis of NMPC algorithms. For the special case of sampled data systems, we finally show that for suitable integral costs, asymptotic stability of the continuous time sampled data closed-loop system follows from the asymptotic stability of the associated discrete time system.
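The dynamic programming principle and its relaxed version can be illustrated on a scalar linear-quadratic example. This is a hypothetical sketch, not taken from the chapter: the dynamics x⁺ = x + u and stage cost ℓ(x, u) = x² + u² are chosen so that the infinite horizon optimal value function V(x) = p·x² can be computed in closed form from the scalar discrete-time Riccati equation, and the relaxed principle is then checked numerically for a deliberately suboptimal feedback law.

```python
import math

# Hypothetical example: dynamics x+ = f(x, u) = x + u,
# stage cost l(x, u) = x^2 + u^2 (assumed for illustration only).
# The infinite horizon optimal value function is V(x) = p * x^2, where p
# solves the scalar Riccati equation p = 1 + p/(1 + p), i.e. p^2 - p - 1 = 0.
p = (1 + math.sqrt(5)) / 2          # positive Riccati solution

def f(x, u):                        # system dynamics
    return x + u

def l(x, u):                        # stage cost
    return x * x + u * u

def V(x):                           # optimal value function (quadratic)
    return p * x * x

def mu_opt(x):                      # optimal feedback law u = -p/(1+p) * x
    return -p / (1 + p) * x

def mu_sub(x):                      # a deliberately nonoptimal feedback law
    return -0.5 * x

# Dynamic programming principle: V(x) = l(x, mu(x)) + V(f(x, mu(x)))
# holds exactly along the optimal feedback.
for x in (-2.0, -0.5, 1.0, 3.0):
    u = mu_opt(x)
    assert abs(V(x) - (l(x, u) + V(f(x, u)))) < 1e-9

# Relaxed dynamic programming: find alpha in (0, 1] with
#   V(x) >= alpha * l(x, mu(x)) + V(f(x, mu(x)))
# for the nonoptimal feedback mu_sub.  Then V still decreases along the
# closed loop (Lyapunov function) and mu_sub achieves at least the
# fraction alpha of the infinite horizon optimal cost.
alpha = min(
    (V(x) - V(f(x, mu_sub(x)))) / l(x, mu_sub(x))
    for x in (-2.0, -0.5, 1.0, 3.0)
)
print(f"suboptimality degree alpha = {alpha:.4f}")
assert 0 < alpha <= 1
```

Note that alpha = 1 recovers the exact dynamic programming principle; the relaxed version only requires the inequality, which is what makes it applicable to the nonoptimal NMPC feedback laws analyzed in the subsequent chapters.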

Cite (APA)

Grüne, L., & Pannek, J. (2017). Infinite Horizon Optimal Control. In Communications and Control Engineering (pp. 71–90). Springer International Publishing. https://doi.org/10.1007/978-3-319-46024-6_4
