Distributed stochastic gradient descent with event-triggered communication

Abstract

We develop a Distributed Event-Triggered Stochastic GRAdient Descent (DETSGRAD) algorithm for solving non-convex optimization problems typically encountered in distributed deep learning. We propose a novel communication-triggering mechanism that allows the networked agents to update their model parameters aperiodically, and we provide sufficient conditions on the algorithm step sizes that guarantee asymptotic mean-square convergence. The algorithm is applied to a distributed supervised-learning problem in which a set of networked agents collaboratively train their individual neural networks to perform image classification while aperiodically sharing model parameters with their one-hop neighbors. Results indicate that all agents achieve similar performance, comparable to that of a centrally trained neural network, while the event-triggered communication significantly reduces inter-agent communication. Results also show that the proposed algorithm allows individual agents to classify images even though the training data for all classes are not locally available to each agent.
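The core idea described in the abstract — agents run local SGD steps, mix with neighbors' most recently *broadcast* parameters, and re-broadcast only when their own parameters have drifted past a decaying threshold — can be illustrated with a minimal sketch. This is not the paper's implementation: the quadratic objectives, ring topology, mixing weights, and the specific step-size and threshold schedules below are all illustrative assumptions.

```python
import numpy as np

def detsgrad_sketch(steps=200, seed=0):
    """Toy sketch of event-triggered distributed SGD (not the paper's code).

    Three agents minimize local quadratics f_i(x) = 0.5 * (x - c_i)^2 with
    noisy gradients. Consensus mixing uses each neighbor's last BROADCAST
    value, and an agent re-broadcasts only when its parameter deviates from
    its last broadcast by more than a decaying threshold.
    """
    rng = np.random.default_rng(seed)
    targets = np.array([-1.0, 0.0, 1.0])   # hypothetical local minima c_i
    x = rng.normal(size=3)                 # local parameters
    x_hat = x.copy()                       # last broadcast copies
    # doubly stochastic mixing matrix for a fully connected 3-agent graph
    W = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])
    events = 0
    for k in range(1, steps + 1):
        alpha = 0.5 / k                    # assumed decaying step size
        thresh = 1.0 / k                   # assumed decaying event threshold
        mixed = W @ x_hat                  # consensus on broadcast copies
        grads = (x - targets) + 0.1 * rng.normal(size=3)  # noisy gradients
        x = mixed - alpha * grads
        send = np.abs(x - x_hat) > thresh  # event trigger per agent
        x_hat[send] = x[send]              # broadcast only on an event
        events += int(send.sum())
    return x, events, 3 * steps            # params, broadcasts, broadcast budget

params, events, budget = detsgrad_sketch()
```

In this sketch the agents drive their parameters toward a common value near the average of the local minimizers while broadcasting far fewer than `budget` messages, mirroring the communication reduction the abstract reports.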

Citation (APA)
George, J., & Gurram, P. (2020). Distributed stochastic gradient descent with event-triggered communication. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 7169–7178). AAAI Press. https://doi.org/10.1609/aaai.v34i05.6206
