Deterministic Optimal Control

Abstract

"As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."
—Albert Einstein (1879–1955), quoted by J. R. Newman in The World of Mathematics

$m = L/c^2$.
—Albert Einstein, the original form of his famous energy-mass relation $E = mc^2$, where L is the Lagrangian, sometimes a form of energy and the cost part of the Hamiltonian in deterministic control theory

It probably comes as a surprise to many Americans that the Wright brothers, Orville and Wilbur, did not invent flying; rather, they developed the first free, controlled, and sustained powered flight by man, as reviewed in Repperger's historical perspective on their technical challenges [233]. Indeed, control is embedded in many modern devices, working silently in computers, motor vehicles, and other useful appliances. Beyond engineering design there are natural control systems, such as the remarkable human brain working with the other components of the central nervous system [172]. Basar [21] lists 25 seminal papers on control, and Bernstein [29] reviews control history through the lens of feedback control. The state and future directions of control of dynamical systems were summarized in the 1988 Fleming panel report [90] and more recently in the 2003 Murray panel report [91].

This chapter provides summary background for examining the difference between deterministic optimal control and stochastic optimal control, the latter treated in Chapter 6. Summarized with commentary are Hamilton's equations, the maximum principle, and the dynamic programming formulation. A special and useful canonical model, the linear quadratic (LQ) model, is presented.

A.1 Hamilton's Equations: Hamiltonian and Lagrange Multiplier Formulation of Deterministic Optimal Control

Many deterministic control problems [164, 44] can be cast as systems of ordinary differential equations, so many standard numerical methods can be used for their solution. For example, if $X(t)$ is the state $n_x$-vector on the state space $\mathcal{X}$ in continuous time $t$ and $U(t)$ is the control $n_u$-vector on the control space $\mathcal{U}$, then the differential equation for the deterministic system dynamics is

\[
  \frac{dX}{dt}(t) = f(X(t), U(t), t), \qquad X(t_0) = x_0. \tag{A.1}
\]
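Once a control law $U(t) = u(t, X(t))$ is fixed, (A.1) becomes an ordinary initial value problem, so any standard ODE integrator applies. The following is a minimal sketch in Python; the dynamics f, the feedback law u, the horizon, and the initial state are hypothetical choices made only for illustration.

    from scipy.integrate import solve_ivp

    def u(t, x):
        # Hypothetical linear feedback law U = -0.5 X, for illustration only.
        return -0.5 * x

    def f(t, x):
        # Hypothetical scalar instance of (A.1): dX/dt = -X + U, with the
        # feedback law substituted so the right-hand side depends on (t, x).
        return -x + u(t, x)

    sol = solve_ivp(f, (0.0, 5.0), [1.0])   # integrate from t0 = 0, x0 = 1
    print(sol.t[-1], sol.y[0, -1])          # final time and final state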
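For orientation, the Hamiltonian and Lagrange multiplier formulation named in this section's title couples (A.1) to a running cost built from the Lagrangian $L$ of the epigraph. The display below is a sketch under standard assumptions (a free terminal state and an interior optimum in the control); sign conventions for $H$ vary across texts.

\[
  H(x, u, \lambda, t) = L(x, u, t) + \lambda^\top f(x, u, t),
\]
\[
  \frac{dX}{dt} = +\frac{\partial H}{\partial \lambda} = f, \qquad
  \frac{d\lambda}{dt} = -\frac{\partial H}{\partial x}, \qquad
  \frac{\partial H}{\partial u} = 0,
\]

where $\lambda(t)$ is the costate, or Lagrange multiplier, $n_x$-vector.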
