Generalized Accelerated Gradient Methods for Distributed MPC Based on Dual Decomposition

Abstract

We consider distributed model predictive control (DMPC) in which a sparse centralized optimization problem, without a terminal cost or terminal constraint set, is solved in a distributed fashion. Distribution of the optimization algorithm is enabled by dual decomposition. Gradient methods are usually used to solve the resulting dual problem, but they are known for slow convergence, especially on ill-conditioned problems. This is undesirable in DMPC, where the amount of communication should be kept as low as possible. In this chapter, we present a distributed optimization algorithm for the optimization problems arising in DMPC that converges significantly faster than the classical gradient method. The improved convergence rate is achieved by using accelerated gradient methods instead of standard gradient methods and by incorporating Hessian information into the gradient iterations in a well-defined manner. We also present a stopping condition for the distributed optimization algorithm that ensures feasibility, stability, and closed-loop performance of the DMPC scheme without using a stabilizing terminal cost or terminal constraint set. © Springer Science+Business Media Dordrecht 2014.
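To illustrate the dual-decomposition idea described in the abstract: for an equality-constrained quadratic program, the gradient of the (concave) dual function is simply the constraint residual evaluated at the primal minimizer, so it can be computed from local primal solutions, and a Nesterov-type accelerated ascent on the dual variables converges much faster than plain gradient ascent. The following is a minimal single-machine sketch of that mechanism, not the chapter's actual algorithm; all problem data (H, g, A, b) are made up for the example, and the DMPC coupling structure is not reproduced:

```python
import numpy as np

# Illustrative equality-constrained QP:  min 0.5 x'Hx + g'x  s.t.  Ax = b
# (H, g, A, b are random placeholders, not from the chapter)
np.random.seed(0)
n, m = 4, 2
M = np.random.randn(n, n)
H = M @ M.T + n * np.eye(n)          # positive definite Hessian
g = np.random.randn(n)
A = np.random.randn(m, n)
b = np.random.randn(m)

Hinv = np.linalg.inv(H)

def primal_minimizer(lam):
    # x*(lam) = argmin_x  0.5 x'Hx + g'x + lam'(Ax - b) = -H^{-1}(g + A'lam)
    return -Hinv @ (g + A.T @ lam)

def dual_gradient(lam):
    # Gradient of the concave dual: residual of the coupling constraint
    return A @ primal_minimizer(lam) - b

# Lipschitz constant of the dual gradient is ||A H^{-1} A'||_2
L = np.linalg.norm(A @ Hinv @ A.T, 2)

lam = np.zeros(m)
lam_prev = lam.copy()
for k in range(500):
    # Nesterov extrapolation, then a gradient-ascent step on the dual
    y = lam + (k - 1) / (k + 2) * (lam - lam_prev)
    lam_prev = lam
    lam = y + (1.0 / L) * dual_gradient(y)

x = primal_minimizer(lam)
print(np.linalg.norm(A @ x - b))     # constraint residual; should be small
```

In a genuine DMPC setting the primal minimization splits across subsystems, so each iteration requires only neighbor-to-neighbor exchange of the dual variables; the chapter's contribution of incorporating Hessian information would replace the scalar step 1/L above with a preconditioned step.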

APA

Giselsson, P., & Rantzer, A. (2014). Generalized Accelerated Gradient Methods for Distributed MPC Based on Dual Decomposition. Intelligent Systems, Control and Automation: Science and Engineering, 69, 309–325. https://doi.org/10.1007/978-94-007-7006-5_19
