This note considers multiagent systems seeking to optimize a convex aggregate function. We assume that the gradient of this function is distributed, meaning that each agent can compute its corresponding partial derivative using only information about itself and its neighbors. In such scenarios, the discrete-time implementation of the gradient descent method poses the basic challenge of determining agent stepsizes that guarantee the monotonic evolution of the objective function. We provide a distributed algorithmic solution to this problem based on the aggregation of agent stepsizes via adaptive convex combinations. Simulations illustrate our results.
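The abstract above describes gradient descent in which each agent proposes a stepsize and the agents then agree on one common stepsize through a convex combination. The following is a minimal sketch of that idea under illustrative assumptions: a quadratic objective over a path graph, conservative local stepsize proposals from Gershgorin row bounds, and fixed uniform combination weights (the note's algorithm adapts these weights; the objective and stepsize rule here are not taken from it).

```python
import numpy as np

n = 5  # agents on a path graph

# Illustrative objective f(x) = 0.5 x^T H x with H = (graph Laplacian) + I.
# The i-th partial derivative (H x)_i depends only on agent i and its
# neighbors, so the gradient is distributed in the sense of the abstract.
L = (np.diag([1.0, 2.0, 2.0, 2.0, 1.0])
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
H = L + np.eye(n)

def f(x):
    return 0.5 * x @ H @ x

# Each agent proposes a conservative stepsize from locally available
# curvature information: 1 / (absolute row sum of H), a Gershgorin bound.
local_steps = 1.0 / np.abs(H).sum(axis=1)

# Aggregate the proposals via a convex combination (uniform weights here,
# as a stand-in for the adaptive weights of the note's algorithm).
weights = np.full(n, 1.0 / n)
alpha = weights @ local_steps

rng = np.random.default_rng(0)
x = rng.normal(size=n)
values = [f(x)]
for _ in range(200):
    x = x - alpha * (H @ x)  # each agent updates its own coordinate
    values.append(f(x))

# The aggregated stepsize is small enough that f decreases monotonically.
assert all(a >= b for a, b in zip(values, values[1:]))
```

With these particular weights the combined stepsize stays below 2 divided by the largest Hessian eigenvalue, which is what guarantees the monotone decrease checked at the end.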
Cortés, J., & Martínez, S. (2015). Distributed line search for multiagent convex optimization. In Lecture Notes in Control and Information Sciences (Vol. 461, pp. 95–110). Springer Verlag. https://doi.org/10.1007/978-3-319-20988-3_6