Distributed Newton's Method for Network Cost Minimization

Abstract

In this article, we examine a novel generic network cost minimization problem, in which every node has a local decision vector to optimize. Each node incurs a cost associated with its decision vector, while each link incurs a cost related to the decision vectors of its two end nodes. All nodes collaborate to minimize the overall network cost. The formulated network cost minimization problem has broad applications in distributed signal processing and control, in which the notion of link costs often arises. To solve this problem in a decentralized manner, we develop a distributed variant of Newton's method, which possesses faster convergence than alternative first-order optimization methods such as gradient descent and the alternating direction method of multipliers. The proposed method is based on an appropriate splitting of the Hessian matrix and an approximation of its inverse, which is used to determine the Newton step. Global linear convergence of the proposed algorithm is established under several standard technical assumptions on the local cost functions. Furthermore, analogous to classical centralized Newton's method, a quadratic convergence phase of the algorithm over a certain time interval is identified. Finally, numerical simulations are conducted to validate the effectiveness of the proposed algorithm and its superiority over other first-order methods, especially when the cost functions are ill-conditioned. Complexity issues of the proposed distributed Newton's method and alternative first-order methods are also discussed.
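To make the setting concrete, the sketch below instantiates the network cost minimization problem on a small quadratic example and computes an approximate Newton direction through a block-diagonal/off-diagonal splitting of the Hessian, one common way such a splitting can be combined with a fixed-point sweep to approximate the Hessian inverse using only neighbor-to-neighbor information. Every quantity here (the quadratic node costs f_i, the quadratic link costs, the weight rho, and the truncation depth K) is an illustrative assumption, not the authors' exact construction; refer to the paper for the actual splitting and its convergence analysis.

# Minimal sketch (not the authors' exact algorithm): splitting-based
# approximate Newton step for a quadratic instance of the network cost
# minimization problem described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
d = 2                                   # dimension of each local decision vector
n = 3                                   # number of nodes
edges = [(0, 1), (1, 2)]                # a 3-node path graph
rho = 1.0                               # assumed quadratic link-cost weight

# Assumed node costs f_i(x_i) = 0.5 x_i^T A_i x_i + b_i^T x_i (strongly convex);
# assumed link costs g_ij(x_i, x_j) = 0.5 * rho * ||x_i - x_j||^2.
A = [np.diag(rng.uniform(1.0, 5.0, d)) for _ in range(n)]
b = [rng.standard_normal(d) for _ in range(n)]

def gradient(x):
    """Stacked gradient of sum_i f_i(x_i) + sum_(i,j) 0.5*rho*||x_i - x_j||^2."""
    g = [A[i] @ x[i] + b[i] for i in range(n)]
    for i, j in edges:
        g[i] += rho * (x[i] - x[j])
        g[j] += rho * (x[j] - x[i])
    return np.concatenate(g)

def hessian():
    """Global Hessian; block-sparse with the sparsity pattern of the graph."""
    H = np.zeros((n * d, n * d))
    I = np.eye(d)
    for i in range(n):
        H[i*d:(i+1)*d, i*d:(i+1)*d] += A[i]
    for i, j in edges:
        H[i*d:(i+1)*d, i*d:(i+1)*d] += rho * I
        H[j*d:(j+1)*d, j*d:(j+1)*d] += rho * I
        H[i*d:(i+1)*d, j*d:(j+1)*d] -= rho * I
        H[j*d:(j+1)*d, i*d:(i+1)*d] -= rho * I
    return H

def approx_newton_direction(H, g, K=5):
    """Split H = D + B (block-diagonal / off-diagonal) and run K fixed-point
    sweeps of D s = -(g + B s); each sweep only needs information from
    neighboring blocks, so it maps to local message passing."""
    D = np.zeros_like(H)
    for i in range(n):
        D[i*d:(i+1)*d, i*d:(i+1)*d] = H[i*d:(i+1)*d, i*d:(i+1)*d]
    B = H - D
    Dinv = np.linalg.inv(D)             # block-diagonal, so invertible node by node
    s = -Dinv @ g
    for _ in range(K):
        s = -Dinv @ (g + B @ s)
    return s

x = [np.zeros(d) for _ in range(n)]
for it in range(20):
    g = gradient(x)
    s = approx_newton_direction(hessian(), g)
    flat = np.concatenate(x) + s        # unit step; a line search could be added
    x = [flat[i*d:(i+1)*d] for i in range(n)]
print("final gradient norm:", np.linalg.norm(gradient(x)))

On this quadratic instance an exact Newton step would solve the problem in one iteration; truncating the inner splitting iteration trades accuracy of the Hessian-inverse approximation for communication, which is the kind of complexity trade-off the abstract alludes to.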

Cite (APA)

Cao, X., & Ray Liu, K. J. (2021). Distributed Newton’s Method for Network Cost Minimization. IEEE Transactions on Automatic Control, 66(3), 1278–1285. https://doi.org/10.1109/TAC.2020.2989266
