On relative loss bounds in generalized linear regression

Abstract

When relative loss bounds are considered, the performance of an on-line learning algorithm is compared to that of a class of off-line algorithms, called experts. In this paper we reconsider a result by Vovk, namely an upper bound on the on-line relative loss for linear regression with square loss; here the experts are linear functions. We give a shorter and simpler proof of Vovk's result and a new motivation for the choice of predictions made by Vovk's learning algorithm. This is done by calculating what is, in a certain sense, the best prediction for the last trial of a sequence of trials when the outcome variable is known to be bounded. We try to generalize these ideas to generalized linear regression, where the experts are neurons, and give a formula for the “best” prediction for the last trial in this case as well. This prediction turns out to be essentially an integral over the “best” expert applied to the last instance. Predictions that are “optimal” in this sense might also be good predictions for long sequences of trials.
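For concreteness, here is a minimal sketch of the learning algorithm in question for the square-loss linear-regression case, in the form usually attributed to Vovk (and to Azoury and Warmuth): the prediction on instance x_t is b_{t-1}^T A_t^{-1} x_t, where A_t already includes the current instance x_t. The class structure, the variable names, and the value of the regularization parameter a below are illustrative assumptions, not taken from the paper.

import numpy as np

class VovkForecaster:
    """Sketch of Vovk's forecaster for on-line linear regression with
    square loss (often called the Vovk-Azoury-Warmuth forecaster)."""

    def __init__(self, dim, a=1.0):
        # a is an illustrative ridge parameter (an assumption here).
        self.A = a * np.eye(dim)   # A_t = a*I + sum_{s<=t} x_s x_s^T
        self.b = np.zeros(dim)     # b_{t-1} = sum_{s<t} y_s x_s

    def predict(self, x):
        # Fold the current instance x_t into A_t *before* predicting;
        # this is what distinguishes Vovk's prediction from ordinary
        # ridge regression.
        self.A += np.outer(x, x)
        return float(self.b @ np.linalg.solve(self.A, x))

    def update(self, x, y):
        # After the outcome y_t is revealed, absorb it into b.
        self.b += y * x

# Toy sequence of trials with a bounded outcome variable:
rng = np.random.default_rng(0)
learner = VovkForecaster(dim=3)
w_best = np.array([0.5, -1.0, 2.0])   # a hypothetical best linear expert
total_loss = 0.0
for _ in range(100):
    x = rng.normal(size=3)
    y_hat = learner.predict(x)
    y = float(np.clip(w_best @ x, -3.0, 3.0))   # |y_t| <= Y = 3, as the abstract assumes bounded outcomes
    total_loss += (y - y_hat) ** 2
    learner.update(x, y)

Note how predict folds the current instance into A before computing the prediction; this shrinkage toward zero is the kind of choice that the last-trial argument described in the abstract is meant to motivate.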

Citation (APA)

Forster, J. (1999). On relative loss bounds in generalized linear regression. In Lecture Notes in Computer Science (Vol. 1684, pp. 269–280). Springer. https://doi.org/10.1007/3-540-48321-7_22
