Securing Distributed Gradient Descent in High Dimensional Statistical Learning

  • Su L
  • Xu J

Abstract

We consider unreliable distributed learning systems wherein the training data is kept confidential by external workers, and the learner has to interact closely with those workers to train a model. In particular, we assume that there exists a system adversary that can adaptively compromise some workers; the compromised workers deviate from their designed local specifications by sending out arbitrarily malicious messages. We assume that in each communication round, up to q out of the m workers suffer Byzantine faults. Each worker keeps a local sample of size n, and the total sample size is N = nm. We propose a secured variant of the gradient descent method that can tolerate up to a constant fraction of Byzantine workers, i.e., q/m = O(1). Moreover, we show that the statistical estimation error of the iterates converges in O(log N) rounds to O(√(q/N) + √(d/N)), where d is the model dimension. As long as q = O(d), our proposed algorithm achieves the optimal error rate O(√(d/N)). Our results are obtained under some technical assumptions. Specifically, we assume a strongly convex population risk; nevertheless, the empirical risk (sample version) is allowed to be non-convex. The core of our method is to robustly aggregate the gradients computed by the workers, based on the filtering procedure proposed by Steinhardt et al. On the technical front, deviating from the existing literature on robustly estimating a finite-dimensional mean vector, we establish a uniform concentration of the sample covariance matrix of gradients and show that the aggregated gradient, as a function of the model parameter, converges uniformly to the true gradient function. To obtain a near-optimal uniform concentration bound, we develop a new matrix concentration inequality, which may be of independent interest.
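
To make the aggregation step concrete, below is a minimal Python sketch of Byzantine-tolerant gradient descent with a spectral-filtering aggregator. It is an illustrative simplification, not the paper's exact procedure: in the spirit of Steinhardt et al.'s filtering, it drops (rather than softly downweights) the gradients that project most heavily onto the top eigenvector of the sample covariance of the received gradients. The function names (`filtered_gradient`, `robust_gd`), the hard removal budget, and the synthetic least-squares setup are assumptions made for illustration only.

```python
import numpy as np

def filtered_gradient(grads, byz_budget):
    """Simplified spectral-filtering aggregator (a sketch; the paper builds on
    Steinhardt et al.'s filtering, which downweights rather than hard-removes
    suspicious gradients)."""
    grads = np.asarray(grads, dtype=float)          # shape (m, d)
    keep = np.ones(len(grads), dtype=bool)
    for _ in range(byz_budget):                     # remove at most byz_budget gradients
        kept = grads[keep]
        centered = kept - kept.mean(axis=0)
        cov = centered.T @ centered / len(kept)     # sample covariance of kept gradients
        _, vecs = np.linalg.eigh(cov)
        top_dir = vecs[:, -1]                       # top principal direction
        scores = (centered @ top_dir) ** 2          # projection energy along that direction
        worst = np.flatnonzero(keep)[np.argmax(scores)]
        keep[worst] = False                         # drop the most suspicious gradient
    return grads[keep].mean(axis=0)

def robust_gd(local_grad_fns, theta0, byz_budget, lr=0.1, rounds=60):
    """Distributed GD loop: each round, collect one gradient message per worker
    (possibly Byzantine), aggregate robustly, and take a descent step."""
    theta = np.array(theta0, dtype=float)
    for _ in range(rounds):
        grads = [g(theta) for g in local_grad_fns]
        theta = theta - lr * filtered_gradient(grads, byz_budget)
    return theta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, m, n, q = 5, 20, 50, 3          # dimension, workers, local sample size, Byzantine workers
    theta_star = rng.normal(size=d)

    def make_honest_worker(X, y):
        # gradient of the local least-squares empirical risk
        return lambda th: X.T @ (X @ th - y) / len(y)

    workers = []
    for _ in range(m):
        X = rng.normal(size=(n, d))
        y = X @ theta_star + 0.1 * rng.normal(size=n)
        workers.append(make_honest_worker(X, y))
    for i in range(q):                 # q workers send arbitrary malicious messages
        workers[i] = lambda th: 100.0 * np.ones(d)

    theta_hat = robust_gd(workers, np.zeros(d), byz_budget=q)
    print("estimation error:", np.linalg.norm(theta_hat - theta_star))
```

In the paper's setting, the aggregation is applied to gradients of the local empirical risks, and the key analytical step is showing that the aggregated gradient converges uniformly, over the model parameter, to the population gradient.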

Cite

APA

Su, L., & Xu, J. (2019). Securing Distributed Gradient Descent in High Dimensional Statistical Learning. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 3(1), 1–41. https://doi.org/10.1145/3322205.3311083
