Sparse kernel learning and the relevance units machine

Abstract

The relevance vector machine (RVM) is a state-of-the-art technique for constructing sparse kernel regression models [1,2,3,4]. It not only generates a much sparser model but also provides better generalization performance than the standard support vector machine (SVM). In both the RVM and the SVM, the relevance vectors (RVs) and support vectors (SVs) are selected from the set of input vectors, which may limit model flexibility. In this paper we propose a new sparse kernel model called the Relevance Units Machine (RUM). The RUM follows the idea of the RVM under the Bayesian framework but relaxes the constraint that the RVs must be selected from the input vectors: it treats the relevance units as part of the model's parameters. As a result, the RUM retains all the advantages of the RVM while offering superior sparsity. The new algorithm is shown to possess considerable computational advantages over well-known state-of-the-art algorithms. © Springer-Verlag Berlin Heidelberg 2009.
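
As a minimal illustrative sketch (not the paper's Bayesian algorithm), the snippet below contrasts an RVM-style model, whose kernel basis centres are restricted to the training inputs, with a RUM-style model whose relevance units are free parameters fitted jointly with the weights. The RBF kernel, the number of units M, and the plain least-squares objective are assumptions made for this example only.

```python
# Sketch: fixed-centre (RVM-style) vs. free-centre (RUM-style) sparse kernel regression.
# Assumptions for illustration: RBF kernel, M = 5 units, least-squares objective.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 60)[:, None]                 # training inputs
y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(60)

def rbf(A, B, gamma=1.0):
    # Pairwise RBF kernel between row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

M = 5                                                # number of basis functions / relevance units

# RVM-style: centres are constrained to lie on training inputs (here a fixed subset).
centres_rvm = X[:: len(X) // M][:M]
w_rvm, *_ = np.linalg.lstsq(rbf(X, centres_rvm), y, rcond=None)

# RUM-style: the relevance units themselves are model parameters,
# optimized together with the weights rather than chosen from X.
def objective(theta):
    units = theta[:M].reshape(M, 1)
    w = theta[M:]
    resid = rbf(X, units) @ w - y
    return 0.5 * resid @ resid

theta0 = np.concatenate([centres_rvm.ravel(), w_rvm])
res = minimize(objective, theta0, method="L-BFGS-B")
units_rum, w_rum = res.x[:M].reshape(M, 1), res.x[M:]

print("RVM-style RMSE:", np.sqrt(np.mean((rbf(X, centres_rvm) @ w_rvm - y) ** 2)))
print("RUM-style RMSE:", np.sqrt(np.mean((rbf(X, units_rum) @ w_rum - y) ** 2)))
```

Because the units are unconstrained, the same number of basis functions can typically fit the data at least as well as input-anchored centres, which is the intuition behind the RUM's improved sparsity; the paper itself places this within the RVM's Bayesian framework rather than the least-squares fit used here.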

Citation (APA)

Gao, J., & Jun, Z. (2009). Sparse kernel learning and the relevance units machine. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5476 LNAI, pp. 612–619). https://doi.org/10.1007/978-3-642-01307-2_60
