Relative expected instantaneous loss bounds

Abstract

In the literature a number of relative loss bounds have been shown for on-line learning algorithms. Here the relative loss is the total loss of the on-line algorithm over all trials minus the total loss of the best comparator chosen off-line. However, for many applications instantaneous loss bounds are more interesting, where the learner first sees a batch of examples and then uses these examples to make a prediction on a new instance. We show relative expected instantaneous loss bounds for the case when the examples are i.i.d. with an unknown distribution. We bound the expected loss of the algorithm on the last example minus the expected loss of the best comparator on a random example. In particular, we study linear regression and density estimation problems and show how the leave-one-out loss can be used to prove instantaneous loss bounds for these cases. For linear regression we use an algorithm that is similar to a new on-line learning algorithm developed by Vovk. Recently a large number of relative total loss bounds have been shown that have the form O(ln T), where T is the number of trials/examples. Standard conversions of on-line algorithms to batch algorithms result in relative expected instantaneous loss bounds of the form O(ln T / T). Our methods lead to O(1/T) upper bounds. In many cases we give tight lower bounds. © 2002 Elsevier Science (USA).
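The leave-one-out idea mentioned in the abstract can be illustrated with a small numerical sketch. This is not the paper's construction, only a hedged approximation: a Vovk-Azoury-Warmuth-style ridge forecaster (the `vaw_predict` helper and the synthetic data are assumptions for illustration), whose leave-one-out squared loss on i.i.d. data estimates the expected instantaneous loss on a fresh example.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 200, 3

# Synthetic i.i.d. linear-regression data (illustrative only).
X = rng.normal(size=(T, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=T)

def vaw_predict(X_train, y_train, x_new, a=1.0):
    """Prediction of a Vovk-Azoury-Warmuth-style forecaster:
    ridge-like weights in which the new instance x_new already
    enters the regularized covariance term (hypothetical helper)."""
    A = a * np.eye(X_train.shape[1]) + X_train.T @ X_train + np.outer(x_new, x_new)
    b = X_train.T @ y_train
    w = np.linalg.solve(A, b)
    return x_new @ w

# Leave-one-out loss: average squared error when each example in
# turn plays the role of the "last" (held-out) instance.
loo = np.mean([
    (y[i] - vaw_predict(np.delete(X, i, 0), np.delete(y, i), X[i])) ** 2
    for i in range(T)
])
print(loo)  # small, close to the noise variance 0.01
```

With i.i.d. examples, the averaged held-out loss above is an unbiased estimate of the forecaster's expected loss on a new instance, which is the quantity the paper's instantaneous bounds control.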

Citation (APA)
Forster, J., & Warmuth, M. K. (2002). Relative expected instantaneous loss bounds. Journal of Computer and System Sciences, 64(1), 76–102. https://doi.org/10.1006/jcss.2001.1798
