This Part I serves as an introduction to Parts III to V of the book, which are mainly concerned with the quadratic cost optimal control problem for distributed parameter systems and systems with time delay, over both finite and infinite time intervals. For problems over a finite time interval, the main tool is Dynamic Programming, which leads to a Hamilton-Jacobi equation for the value function. For the class of control problems considered, the Hamilton-Jacobi equation can be solved explicitly via the study of an operator Riccati equation. The study of the operator Riccati equation poses additional technical difficulties when control is exercised through the boundary (in the case of distributed parameter systems) or when delays are present in the control (in the case of systems with time delay); the results of Part II are needed to overcome these difficulties. For problems over an infinite time interval, the concepts of controllability and observability (and the weaker concepts of stabilizability and detectability) play an essential role in the development of the theory.
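As a point of reference, the route from Dynamic Programming to a Riccati equation can be sketched in the classical finite-dimensional setting (the parts of the book cited above extend this picture to operator-valued $P$); the notation below is illustrative, not that of the book:

Consider the linear system $\dot{x}(t) = Ax(t) + Bu(t)$, $x(0) = x_0$, with quadratic cost
\[
J(u) = x(T)^{*} P_T\, x(T) + \int_0^T \bigl( x(t)^{*} Q\, x(t) + u(t)^{*} R\, u(t) \bigr)\, dt,
\qquad Q \ge 0,\ R > 0.
\]
The value function is quadratic in the state, $V(x,t) = x^{*} P(t)\, x$, and substituting this form into the Hamilton-Jacobi equation reduces it to the matrix Riccati differential equation
\[
-\dot{P}(t) = A^{*} P(t) + P(t) A - P(t) B R^{-1} B^{*} P(t) + Q,
\qquad P(T) = P_T,
\]
with the optimal control given in feedback form by $u(t) = -R^{-1} B^{*} P(t)\, x(t)$. Over an infinite horizon, under stabilizability and detectability assumptions, $P(t)$ is replaced by the solution of the corresponding algebraic Riccati equation.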
Control of linear differential systems. (2007). In Systems and Control: Foundations and Applications (pp. 13–45). Birkhäuser. https://doi.org/10.1007/978-0-8176-4581-6_2