Approximate gradient/penalty methods with general discretization schemes for optimal control problems

Abstract

We consider an optimal control problem described by ordinary differential equations, with control and state constraints. The state equation is first discretized by a general explicit Runge-Kutta scheme, and the controls are approximated by piecewise polynomial functions. We then propose approximate gradient and gradient projection methods, and their penalized versions, that construct sequences of discrete controls and progressively refine the discretization during the iterations. Instead of the exact discrete cost derivative, which usually requires tedious calculations involving composite functions, we use an approximate derivative of the cost, defined by discretizing the continuous adjoint equation backward with the same, but nonmatching, Runge-Kutta scheme and the integral involved with a Newton-Cotes integration rule. We show that strong accumulation points in L2 of sequences constructed by these methods satisfy the weak necessary conditions for optimality for the continuous problem. Finally, numerical examples are given. © Springer-Verlag Berlin Heidelberg 2006.
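
To make the construction concrete, here is a minimal sketch of the kind of approximate gradient projection iteration the abstract describes, under simplifying assumptions that depart from the paper: a fixed uniform mesh (no progressive refinement), no state constraints or penalty terms, the explicit Euler scheme as the Runge-Kutta method, piecewise constant controls, and an illustrative scalar problem. All problem data (f, g, their partials, the bounds) and all parameters below are hypothetical, not taken from the paper.

```python
# Sketch: approximate gradient projection for
#   min J(u) = \int_0^T g(x,u) dt,  x' = f(x,u), x(0) = x0,  u in [u_lo, u_hi],
# with the continuous adjoint discretized backward by the same explicit scheme
# and the cost integral by a Newton-Cotes (trapezoidal) rule.
import numpy as np

T, N = 1.0, 200                 # horizon and number of mesh intervals (assumed)
h = T / N                       # uniform step of the discretization
x0 = 1.0                        # initial state (assumed)
u_lo, u_hi = -1.0, 1.0          # box control constraints (assumed)

# Hypothetical problem data: dynamics f, running cost g, and their partials.
f   = lambda x, u: -x + u
f_x = lambda x, u: -1.0
f_u = lambda x, u: 1.0
g   = lambda x, u: x**2 + u**2
g_x = lambda x, u: 2.0 * x
g_u = lambda x, u: 2.0 * u

def state(u):
    """Explicit (Euler) Runge-Kutta discretization of x' = f(x,u), x(0) = x0."""
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + h * f(x[k], u[k])
    return x

def adjoint(x, u):
    """The same explicit scheme applied backward to the *continuous* adjoint
    p' = -f_x(x,u) p - g_x(x,u), p(T) = 0 (hence 'nonmatching': it is not the
    exact adjoint of the discrete state recursion)."""
    p = np.empty(N + 1)
    p[N] = 0.0
    for k in range(N, 0, -1):
        p[k - 1] = p[k] + h * (f_x(x[k], u[k - 1]) * p[k] + g_x(x[k], u[k - 1]))
    return p

def cost(x, u):
    """Trapezoidal (closed Newton-Cotes) rule for the cost integral of g."""
    vals = g(x, np.append(u, u[-1]))   # hold the last control value at t = T
    return h * (0.5 * vals[0] + vals[1:-1].sum() + 0.5 * vals[-1])

# Approximate gradient projection iteration with a fixed step length (assumed).
u = np.zeros(N)                        # initial piecewise constant control
step = 0.2
for it in range(500):
    x = state(u)
    p = adjoint(x, u)
    grad = f_u(x[:-1], u) * p[1:] + g_u(x[:-1], u)   # approximate cost derivative
    u_next = np.clip(u - step * grad, u_lo, u_hi)    # projection onto the box
    if np.max(np.abs(u_next - u)) < 1e-9:
        break
    u = u_next

print(f"iterations: {it},  approximate cost: {cost(state(u), u):.6f}")
```

The point of the construction is visible in `adjoint`: the backward sweep discretizes the continuous adjoint equation directly, so `grad` is only an approximation of the exact discrete cost derivative; the paper's result is that, under its assumptions, strong L2 accumulation points of such iterates still satisfy the weak necessary optimality conditions of the continuous problem.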

Citation

Chryssoverghi, I. (2006). Approximate gradient/penalty methods with general discretization schemes for optimal control problems. In Lecture Notes in Computer Science (Vol. 3743, pp. 199–207). Springer. https://doi.org/10.1007/11666806_21
