Sparse prototype representation by core sets

Abstract

The increasing prevalence of large data sets calls for efficient learning algorithms, and interpretable models are desirable for drawing sound conclusions from the results. Prototype-based learning algorithms have recently been extended to proximity learners in order to analyze data given in non-standard formats. The supervised methods of this type are of particular interest, but they suffer from a large number of optimization parameters needed to model the prototypes. In this contribution we derive an efficient core-set-based preprocessing step that restricts the number of model parameters to O(n/ε²), where n is the number of prototypes. The number of model parameters thus becomes independent of the size of the data set and instead scales with the requested precision ε of the core sets. Experimental results show that our approach substantially reduces memory complexity without significantly degrading performance. © 2013 Springer-Verlag.
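
The abstract gives no algorithmic details; as a rough illustration of the kind of ε-core set it refers to, the Python sketch below implements the classic Bădoiu–Clarkson construction for a minimum enclosing ball, which yields a core set of size O(1/ε²). A plausible reading of the O(n/ε²) bound is one such core set per prototype, but the Euclidean setting, the function name meb_core_set, and this per-prototype interpretation are illustrative assumptions here, not the authors' exact procedure for proximity data.

import numpy as np

def meb_core_set(X, eps):
    # Badoiu-Clarkson construction: after ceil(1/eps^2) rounds the
    # selected points form an eps-core set of the minimum enclosing
    # ball of X, i.e. their ball expanded by (1 + eps) covers all of X.
    c = X[0].astype(float)          # start from an arbitrary point
    core = [0]
    for i in range(1, int(np.ceil(1.0 / eps ** 2)) + 1):
        d2 = np.sum((X - c) ** 2, axis=1)
        j = int(np.argmax(d2))      # farthest point violates the ball most
        core.append(j)
        c += (X[j] - c) / (i + 1)   # shrinking step toward the violator
    return np.unique(core), c

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 10))
    idx, center = meb_core_set(X, eps=0.1)
    print(f"core set: {len(idx)} of {len(X)} points")

With eps = 0.1 the loop runs 100 rounds, so the core set holds at most 101 points regardless of how many samples X contains, mirroring the abstract's claim that the parameter count decouples from the data set size.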

Cite

APA

Schleif, F.-M., Zhu, X., & Hammer, B. (2013). Sparse prototype representation by core sets. In Lecture Notes in Computer Science (Vol. 8206, pp. 302–309). Springer. https://doi.org/10.1007/978-3-642-41278-3_37
