Efficient protocols for distributed classification and optimization

Abstract

A recent paper [1] proposes a general model for distributed learning that bounds the communication required to learn classifiers with ε error on linearly separable data adversarially distributed across nodes. In this work, we develop key improvements and extensions to this basic model. Our first result is a two-party multiplicative-weight-update based protocol that uses O(d² log(1/ε)) words of communication to classify distributed data in arbitrary dimension d, ε-optimally. This extends to classification over k nodes with O(kd² log(1/ε)) words of communication. Our proposed protocol is simple to implement and is considerably more efficient than the baselines we compare against, as demonstrated by our empirical results. In addition, we show how to solve fixed-dimensional and high-dimensional linear programming with small communication in a distributed setting where constraints may be distributed across nodes. Our techniques make use of a novel connection to multipass streaming, as well as an adaptation of the multiplicative-weight-update framework to a more general distributed setting. © 2012 Springer-Verlag.
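
To make the multiplicative-weight-update idea concrete, here is a minimal single-machine sketch of reweighting misclassified points and voting over the resulting hyperplanes. This is only an illustration of the general framework, not the paper's two-party protocol; the function name mwu_separator, the weighted least-squares "weak learner", and the parameters rounds and eta are illustrative assumptions, and the data is assumed to be linearly separable with labels in {-1, +1}.

```python
# Illustrative multiplicative-weights sketch for linear separation.
# NOT the paper's distributed protocol: no communication model here,
# just the core reweighting loop the framework is built on.
import numpy as np

def mwu_separator(X, y, rounds=50, eta=0.5):
    n, d = X.shape
    weights = np.ones(n)              # one weight per data point
    hypotheses = []
    for _ in range(rounds):
        p = weights / weights.sum()
        # "Weak learner" (assumed for illustration): weighted
        # least-squares hyperplane fit under the current weights.
        sw = np.sqrt(p)
        w = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        hypotheses.append(w)
        # Multiplicatively boost the weights of misclassified points.
        mistakes = np.sign(X @ w) != y
        weights[mistakes] *= (1.0 + eta)
    # Final classifier: majority vote over the rounds' hyperplanes.
    H = np.stack(hypotheses)
    return lambda Xq: np.sign(np.sign(Xq @ H.T).sum(axis=1))
```

In the distributed setting of the paper, the point of the framework is that each round only requires exchanging a small summary (such as a candidate hyperplane and a bounded set of violated points), which is what yields the O(d² log(1/ε)) communication bound; the sketch above omits that aspect entirely.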

Citation (APA)

Daumé, H., Phillips, J. M., Saha, A., & Venkatasubramanian, S. (2012). Efficient protocols for distributed classification and optimization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7568 LNAI, pp. 154–168). https://doi.org/10.1007/978-3-642-34106-9_15
