Online variance minimization

33 citations · 9 Mendeley readers

Abstract

We design algorithms for two online variance minimization problems. Specifically, in every trial t our algorithms receive a covariance matrix C_t and select a parameter vector w_t such that the total variance over a sequence of trials, ∑_t w_t^T C_t w_t, is not much larger than the total variance of the best parameter vector u chosen in hindsight. Two parameter spaces are considered: the probability simplex and the unit sphere. The first space is associated with the problem of minimizing risk in stock portfolios, and the second leads to an online calculation of the eigenvector with minimum eigenvalue. For the first parameter space we apply the Exponentiated Gradient algorithm, which is motivated by a relative entropy. In the second case the algorithm maintains a mixture of unit vectors represented as a density matrix. The motivating divergence for density matrices is the quantum version of the relative entropy, and the resulting algorithm is a special case of the Matrix Exponentiated Gradient algorithm. In each case we prove bounds on the additional total variance incurred by the online algorithm over the best offline parameter. © Springer-Verlag Berlin Heidelberg 2006.

Citation (APA)

Warmuth, M. K., & Kuzmin, D. (2006). Online variance minimization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4005 LNAI, pp. 514–528). Springer Verlag. https://doi.org/10.1007/11776420_38
