The optimal solution of a non-convex state-dependent LQR problem and its applications

Abstract

This paper studies a Non-convex State-dependent Linear Quadratic Regulator (NSLQR) problem, in which the control penalty weighting matrix R in the performance index is state-dependent. A necessary and sufficient condition for the optimal solution is established with a rigorous proof based on the Euler-Lagrange equation. It is shown that the optimal solution of the NSLQR problem can be obtained by solving a Pseudo-Differential-Riccati-Equation (PDRE) simultaneously with the closed-loop system equation. A Comparison Theorem for the PDRE is given to facilitate solution methods for the PDRE. A linear time-varying system is employed as a simulation example to verify the proposed optimal solution. As a non-trivial application, a goal pursuit process in psychology is modeled as an NSLQR problem, and two typical goal pursuit behaviors found in humans and animals are reproduced using different control weightings R(x). These two behaviors are found to save control energy and cause less stress compared with the Conventional Control Behavior typified by LQR control with a constant control weighting R, in situations where only the goal discrepancy at the terminal time is of concern, such as marathon races and target-hitting missions. © 2014 Xu et al.
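The abstract contrasts the state-dependent NSLQR solution with the Conventional Control Behavior given by finite-horizon LQR with a constant weighting R. The sketch below illustrates only that conventional baseline: the standard differential Riccati equation integrated backward in time, followed by forward simulation of the closed-loop system. It does not reproduce the paper's PDRE for the R(x) case, and all system matrices, weights, and the horizon are hypothetical placeholders chosen for illustration.

```python
# Minimal sketch of the conventional constant-R finite-horizon LQR baseline.
# NOT the paper's PDRE-based NSLQR solution; matrices below are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical 2-state system: dx/dt = A x + B u
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)            # state weighting
R = np.array([[1.0]])    # constant control weighting (conventional LQR)
S = np.eye(2)            # terminal penalty, P(T) = S
T = 5.0                  # horizon

def riccati_rhs(t, p_flat):
    # Standard differential Riccati equation:
    # dP/dt = -(A'P + P A - P B R^{-1} B' P + Q), integrated backward from t = T
    P = p_flat.reshape(2, 2)
    dP = -(A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q)
    return dP.ravel()

# Backward integration from T to 0 (solve_ivp accepts a decreasing time span)
sol_P = solve_ivp(riccati_rhs, (T, 0.0), S.ravel(), dense_output=True, rtol=1e-8)

def closed_loop(t, x):
    # Conventional LQR feedback: u = -R^{-1} B' P(t) x
    P = sol_P.sol(t).reshape(2, 2)
    u = -np.linalg.solve(R, B.T @ P @ x.reshape(-1, 1))
    return (A @ x.reshape(-1, 1) + B @ u).ravel()

x0 = np.array([1.0, 0.0])
sol_x = solve_ivp(closed_loop, (0.0, T), x0, rtol=1e-8)
print("terminal state:", sol_x.y[:, -1])
```

In the NSLQR setting described above, R would instead depend on the state, so the Riccati-type equation and the closed-loop dynamics can no longer be decoupled in this way; that coupled, simultaneous solution is what the paper's PDRE addresses.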

Citation (APA)

Xu, X., Zhu, J. J., & Zhang, P. (2014). The optimal solution of a non-convex state-dependent LQR problem and its applications. PLoS ONE, 9(4). https://doi.org/10.1371/journal.pone.0094925
