Abstract
This paper develops a distributed optimization strategy with guaranteed exact convergence for a broad class of left-stochastic combination policies. The resulting exact diffusion strategy is shown in Part II of this paper to have a wider stability range and superior convergence performance compared to the EXTRA strategy. The exact diffusion method is applicable to locally balanced left-stochastic combination matrices, which, compared to the conventional doubly stochastic matrix, are more general and endow the algorithm with faster convergence rates, more flexible step-size choices, and improved privacy-preserving properties. The derivation of the exact diffusion strategy relies on reformulating the aggregate optimization problem as a penalized problem and resorting to a diagonally weighted incremental construction. Detailed stability and convergence analyses are pursued in Part II of this paper and are facilitated by examining the evolution of the error dynamics in a transformed domain. Numerical simulations illustrate the theoretical conclusions.
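To make the strategy described above concrete, below is a minimal sketch of an adapt-correct-combine exact diffusion recursion on a toy decentralized least-squares problem. The network size, the quadratic local costs, the ring-topology combination matrix, and the step size are all illustrative assumptions, not taken from the paper; for simplicity the sketch uses a symmetric doubly stochastic combination matrix, whereas the paper's contribution covers the more general locally balanced left-stochastic case.

```python
import numpy as np

# Toy setup (illustrative assumption): N agents, each with a local
# quadratic cost J_k(w) = 0.5 * ||A_k w - b_k||^2, so grad J_k = A_k^T (A_k w - b_k).
rng = np.random.default_rng(0)
N, d = 5, 3
A = [rng.standard_normal((10, d)) for _ in range(N)]
b = [rng.standard_normal(10) for _ in range(N)]

def grad(k, w):
    return A[k].T @ (A[k] @ w - b[k])

# Combination matrix: a simple symmetric doubly stochastic choice on a ring.
C = np.zeros((N, N))
for k in range(N):
    C[k, k] = 0.5
    C[k, (k - 1) % N] = 0.25
    C[k, (k + 1) % N] = 0.25

mu = 0.01                      # step size (assumed small enough for stability)
w = np.zeros((N, d))           # current iterates w_{k,i-1}, one row per agent
psi_prev = w.copy()            # previous intermediate psi_{k,i-1}

for i in range(2000):
    psi = np.array([w[k] - mu * grad(k, w[k]) for k in range(N)])  # adapt
    phi = psi + w - psi_prev                                       # correct
    w = C @ phi                                                    # combine
    psi_prev = psi

# Compare against the global minimizer of sum_k J_k(w), i.e., the
# least-squares solution of the stacked system.
w_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("per-agent distance to global minimizer:",
      np.linalg.norm(w - w_star, axis=1))
```

Under these assumptions, all agent iterates should agree and converge to the single global minimizer, which is the "exact convergence" property the abstract refers to; without the correction step, the recursion reduces to standard diffusion, which settles at a small step-size-dependent bias instead.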
Citation
Yuan, K., Ying, B., Zhao, X., & Sayed, A. H. (2019). Exact diffusion for distributed optimization and learning - Part I: Algorithm development. IEEE Transactions on Signal Processing, 67(3), 708–723. https://doi.org/10.1109/TSP.2018.2875898