Improving confidence of dual averaging stochastic online learning via aggregation

Abstract

Stochastic online learning algorithms typically converge slowly, but solutions of moderate accuracy often suffice in practice. Since the outputs of these algorithms are random variables, not only their accuracy but also the probability of reaching a given accuracy, called confidence, matters. We show that a rather simple aggregation of the outcomes of parallel dual averaging runs yields a solution with improved confidence, and that the confidence can be controlled by the number of runs, independently of the length of the individual learning processes. © 2012 Springer-Verlag.
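To make the idea concrete, the following Python sketch runs several independent dual averaging processes on a toy stochastic least-squares problem and combines their outputs. The objective, the weight schedule beta_t = gamma * sqrt(t), and plain averaging as the aggregation rule are illustrative assumptions chosen here for concreteness; the paper's actual aggregation scheme and its confidence analysis are not reproduced.

import numpy as np

def stochastic_gradient(x, rng, x_star, noise=0.5):
    # Noisy gradient of f(x) = 0.5 * E[(a^T x - a^T x_star - eps)^2] for random a.
    a = rng.standard_normal(x.shape)
    eps = noise * rng.standard_normal()
    return a * (a @ x - a @ x_star - eps)

def dual_averaging(T, dim, x_star, gamma=1.0, seed=0):
    # Simple (Euclidean) dual averaging: at step t the iterate minimizes
    # <g_sum, x> + beta_t * ||x||^2 / 2, i.e. x = -g_sum / beta_t.
    rng = np.random.default_rng(seed)
    g_sum = np.zeros(dim)   # running sum of stochastic gradients
    x = np.zeros(dim)       # current iterate
    x_avg = np.zeros(dim)   # running average of iterates (the run's output)
    for t in range(1, T + 1):
        g = stochastic_gradient(x, rng, x_star)
        g_sum += g
        beta_t = gamma * np.sqrt(t)      # increasing regularization weight
        x = -g_sum / beta_t
        x_avg += (x - x_avg) / t         # incremental mean of iterates
    return x_avg

def aggregate_parallel_runs(K, T, dim, x_star):
    # Run K independent dual-averaging processes (different seeds) and
    # aggregate their outputs; averaging is only a placeholder rule here.
    outputs = [dual_averaging(T, dim, x_star, seed=k) for k in range(K)]
    return np.mean(outputs, axis=0)

if __name__ == "__main__":
    dim, T, K = 5, 2000, 8
    x_star = np.ones(dim)
    single = dual_averaging(T, dim, x_star, seed=0)
    combined = aggregate_parallel_runs(K, T, dim, x_star)
    print("single-run error:      ", np.linalg.norm(single - x_star))
    print("aggregated error (K=8):", np.linalg.norm(combined - x_star))

Increasing K tightens the spread of the aggregated output around the optimum without lengthening any individual run of T iterations, mirroring the abstract's claim that confidence is controlled by the number of runs rather than by the length of the learning process. With averaging as the combination rule this mainly reduces variance; the paper's confidence guarantee may rest on a different, though still simple, aggregation.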

Citation (APA)

Lee, S. (2012). Improving confidence of dual averaging stochastic online learning via aggregation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7526 LNAI, pp. 229–232). https://doi.org/10.1007/978-3-642-33347-7_20
