The support vector decomposition machine

Abstract

In machine learning problems with tens of thousands of features and only dozens or hundreds of independent training examples, dimensionality reduction is essential for good learning performance. In previous work, many researchers have treated the learning problem in two separate phases: first use an algorithm such as singular value decomposition to reduce the dimensionality of the data set, and then use a classification algorithm such as naïve Bayes or support vector machines to learn a classifier. We demonstrate that it is possible to combine the two goals of dimensionality reduction and classification into a single learning objective, and present a novel and efficient algorithm which optimizes this objective directly. We present experimental results in fMRI analysis which show that we can achieve better learning performance and lower-dimensional representations than two-phase approaches can.
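To make the contrast concrete, the sketch below illustrates the two-phase baseline the abstract describes: first reduce dimensionality with SVD, then train a support vector machine on the reduced representation. The use of scikit-learn, the synthetic data, and all parameter values are assumptions for illustration only; the paper's own combined objective (the support vector decomposition machine) is not reproduced here.

```python
# A minimal sketch of the two-phase baseline: SVD for dimensionality
# reduction, followed by a linear SVM classifier. Library choice and
# data are illustrative assumptions, not the paper's experimental setup.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 20000))   # ~hundreds of examples, tens of thousands of features
y = rng.integers(0, 2, size=120)        # binary class labels

# Phase 1: learn a low-dimensional representation.
# Phase 2: fit the classifier on that representation.
# The two phases are optimized separately, unlike the single learning
# objective the paper proposes.
two_phase = make_pipeline(
    TruncatedSVD(n_components=20, random_state=0),
    LinearSVC(C=1.0),
)
two_phase.fit(X, y)
print("training accuracy:", two_phase.score(X, y))
```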

Citation (APA)

Pereira, F., & Gordon, G. (2006). The support vector decomposition machine. In ACM International Conference Proceeding Series (Vol. 148, pp. 689–696). https://doi.org/10.1145/1143844.1143931
