Frequency-aware truncated methods for sparse online learning


Abstract

Online supervised learning with L1-regularization has gained attention recently because it generally requires less computational time and smaller space complexity than batch-type learning methods. However, a simple L1-regularization method used in an online setting has the side effect that rare features tend to be truncated more than necessary. In fact, feature frequency is highly skewed in many applications. We developed a new family of L1-regularization methods based on the previous updates for loss minimization in linear online learning settings. Our methods can identify and retain rarely occurring but informative features at the same computational cost and convergence rate as previous works. Moreover, we combined our methods with a cumulative penalty model to derive models that are more robust to noisy data. We applied our methods to several datasets and empirically evaluated the performance of our algorithms. Experimental results showed that our frequency-aware truncated models improved the prediction accuracy. © 2011 Springer-Verlag.
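For intuition, the following is a minimal sketch of the kind of frequency-aware truncation the abstract describes: an online subgradient learner in which the L1 shrinkage is applied to a coordinate only when that feature actually occurs, so rare features accumulate less total penalty than under uniform truncation. This is an illustrative assumption about the approach, not the authors' exact update rule; the function name frequency_aware_sgd, the choice of logistic loss, and the hyperparameters lam and eta are all hypothetical.

```python
import numpy as np

def frequency_aware_sgd(examples, dim, lam=0.01, eta=0.1):
    """Illustrative sketch (not the paper's exact algorithm): online
    logistic-loss SGD where L1 soft-thresholding is applied per
    coordinate only on steps where that feature is active, so the
    total penalty a feature receives scales with its frequency."""
    w = np.zeros(dim)
    for x, y in examples:  # x: dense feature vector, y in {-1, +1}
        # Subgradient of the logistic loss log(1 + exp(-y * w.x)).
        margin = y * np.dot(w, x)
        grad = -y * x / (1.0 + np.exp(margin))
        w -= eta * grad
        # Truncate (soft-threshold) only the features that occurred,
        # leaving weights of absent -- often rare -- features untouched.
        active = x != 0
        w[active] = np.sign(w[active]) * np.maximum(
            np.abs(w[active]) - eta * lam, 0.0)
    return w
```

Under this sketch, a feature seen 1,000 times is shrunk up to 1,000 times while a feature seen twice is shrunk at most twice, which captures the skewed-frequency motivation stated in the abstract.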


CITATION STYLE

APA

Oiwa, H., Matsushima, S., & Nakagawa, H. (2011). Frequency-aware truncated methods for sparse online learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6912 LNAI, pp. 533–548). https://doi.org/10.1007/978-3-642-23783-6_34
