A distributed learning algorithm based on frontier vector quantization and information theory

Abstract

In this paper, we propose a novel distributed learning algorithm built upon the Frontier Vector Quantization based on Information Theory (FVQIT) method. The FVQIT is very effective in classification problems, but its training times are poor; distributed learning is therefore an appropriate way to speed up training. One of the most promising lines of research on learning from distributed data sets is separated learning followed by model integration. Separated learning avoids moving raw data between the distributed nodes. In this work, the integration of local models is implemented using a genetic algorithm. Results on twelve classification data sets demonstrate the efficacy of the proposed method: on average, the distributed FVQIT runs 13.56 times faster than the FVQIT and improves classification accuracy by 5.25%. © 2013 Springer-Verlag Berlin Heidelberg.
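The separated-learning-plus-integration scheme described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' FVQIT implementation: each "node" trains a simple local classifier (here a nearest-centroid model on toy 1-D data) on its own partition, so raw data never moves between nodes, and a small genetic algorithm then searches for vote weights that combine the local models into one global classifier.

```python
# Hedged sketch of separated learning with GA-based model integration.
# All names (make_data, train_centroids, evolve, ...) are illustrative,
# and the nearest-centroid local model stands in for FVQIT.
import random

random.seed(0)

# --- Toy 1-D two-class data, split across 3 "nodes" ---
def make_data(n):
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = random.gauss(0.0 if label == 0 else 2.0, 1.0)
        data.append((x, label))
    return data

partitions = [make_data(40) for _ in range(3)]   # local training sets
validation = make_data(60)                       # held out for integration

# --- Separated learning: each node fits a local model locally ---
def train_centroids(part):
    cents = {}
    for cls in (0, 1):
        xs = [x for x, y in part if y == cls]
        cents[cls] = sum(xs) / len(xs)
    return cents

local_models = [train_centroids(p) for p in partitions]

def predict(cents, x):
    # Assign x to the class with the nearest centroid.
    return min(cents, key=lambda c: abs(x - cents[c]))

# --- Model integration: GA evolves one vote weight per local model ---
def accuracy(weights, data):
    correct = 0
    for x, y in data:
        votes = [0.0, 0.0]
        for w, m in zip(weights, local_models):
            votes[predict(m, x)] += w
        pred = 1 if votes[1] > votes[0] else 0
        correct += (pred == y)
    return correct / len(data)

def evolve(pop_size=20, gens=30):
    # Each individual is a vector of non-negative vote weights.
    pop = [[random.random() for _ in local_models] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda w: -accuracy(w, validation))
        elite = pop[: pop_size // 2]          # selection
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)    # crossover (averaging)
            child = [(u + v) / 2 + random.gauss(0, 0.1)  # mutation
                     for u, v in zip(a, b)]
            children.append([max(0.0, g) for g in child])
        pop = elite + children
    return pop[0]

best = evolve()
print(round(accuracy(best, validation), 2))
```

In this sketch the GA's fitness function is validation accuracy of the weighted-vote ensemble, mirroring the idea that only compact local models (not raw data) are shipped to the integration step.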

APA

Peteiro-Barral, D., & Guijarro-Berdiñas, B. (2013). A distributed learning algorithm based on frontier vector quantization and information theory. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8131 LNCS, pp. 122–129). https://doi.org/10.1007/978-3-642-40728-4_16
