Learning sparse confidence-weighted classifier on very high dimensional data


Abstract

Confidence-weighted (CW) learning is a successful online learning paradigm that maintains a Gaussian distribution over classifier weights and adopts a covariance matrix to represent the uncertainty of the weight vector. However, existing full CW learning paradigms suffer from two deficiencies: sensitivity to irrelevant features, and poor scalability to high-dimensional data due to the maintenance of the full covariance structure. In this paper, we begin by presenting an online-batch CW learning scheme, and then propose a novel paradigm for learning sparse CW classifiers. The proposed paradigm essentially identifies feature groups and naturally builds a block-diagonal covariance structure, making it very suitable for CW learning over very high-dimensional data. Extensive experimental results demonstrate the superior performance of the proposed methods over state-of-the-art counterparts on classification and feature-selection tasks.
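To make the CW idea concrete, the following is a minimal sketch of an AROW-style confidence-weighted update. It is not the paper's method: for simplicity it keeps only a diagonal covariance (per-feature variances), whereas the paper maintains a block-diagonal covariance over automatically identified feature groups. The function name, the regularization parameter `r`, and the toy data are illustrative assumptions.

```python
import numpy as np

def arow_diag_update(mu, sigma, x, y, r=1.0):
    """One AROW-style confidence-weighted update with a diagonal covariance.

    mu    : mean weight vector (the classifier)
    sigma : per-feature variances (diagonal of the covariance matrix);
            an illustrative simplification of the paper's block-diagonal structure
    x     : feature vector of the incoming example
    y     : label in {-1, +1}
    r     : regularization parameter controlling how fast confidence grows
    """
    margin = y * mu.dot(x)                 # signed margin under the mean weights
    v = (sigma * x * x).sum()              # confidence term x^T Sigma x
    if margin < 1.0:                       # update only when hinge loss is suffered
        beta = 1.0 / (v + r)
        alpha = (1.0 - margin) * beta
        mu = mu + alpha * y * sigma * x    # larger steps on low-confidence features
        sigma = sigma - beta * (sigma * x) ** 2  # shrink variance on seen features
    return mu, sigma
```

Keeping only the diagonal (or, as in the paper, a block diagonal) is what makes the covariance maintenance tractable at very high dimension: each update costs O(d) here instead of the O(d^2) required by a full covariance matrix.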

Citation (APA)

Tan, M., Yan, Y., Wang, L., Van Den Hengel, A., Tsang, I. W., & Shi, Q. (2016). Learning sparse confidence-weighted classifier on very high dimensional data. In 30th AAAI Conference on Artificial Intelligence, AAAI 2016 (pp. 2080–2086). AAAI press. https://doi.org/10.1609/aaai.v30i1.10281
