On-line maximum likelihood prediction with respect to general loss functions

1 citation · 16 Mendeley readers
Abstract

This paper introduces a new family of deterministic and stochastic on-line prediction algorithms which perform well with respect to general loss functions, and analyzes their behavior in terms of expected loss bounds. The algorithms use parametric probabilistic models regardless of the kind of loss function used. The key idea of the algorithms is to iteratively estimate the probabilistic model by the maximum likelihood method and then to construct the optimal prediction function, which minimizes the average of the loss taken with respect to the estimated probabilistic model. A future outcome is predicted using this optimal prediction function. We analyze the algorithms for the cases where the target distribution is 1) k-dimensional parametric and k is known, 2) k-dimensional parametric but k is unknown, and 3) non-parametric. For all these cases, we derive upper bounds on the expected instantaneous or cumulative losses of the algorithms with respect to a large family of loss functions satisfying the constraint introduced by Merhav and Feder. These loss bounds reveal new universal relations among the expected prediction accuracy, the indexes of the loss function, the complexity of the target rule, and the number of training examples.
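To make the estimate-then-predict loop concrete, here is a minimal Python sketch of the general scheme sketched in the abstract, worked out for the illustrative case of a Bernoulli target. The Bernoulli model, the smoothing constant, the grid search over candidate predictions, and the names `ml_estimate`, `optimal_prediction`, and `online_ml_predict` are assumptions made for this sketch, not constructs taken from the paper.

```python
import numpy as np

def ml_estimate(outcomes):
    # Maximum-likelihood estimate of the Bernoulli parameter from past outcomes.
    # A small smoothing constant (an assumption of this sketch) keeps the
    # estimate inside the open interval (0, 1) even for short histories.
    n = len(outcomes)
    return (sum(outcomes) + 0.5) / (n + 1.0)

def optimal_prediction(theta_hat, loss, grid=np.linspace(0.0, 1.0, 1001)):
    # Prediction minimizing the expected loss under the estimated model:
    #   argmin_p  theta_hat * loss(1, p) + (1 - theta_hat) * loss(0, p)
    # A grid search is used so that arbitrary loss functions can be plugged in.
    expected = theta_hat * loss(1, grid) + (1 - theta_hat) * loss(0, grid)
    return grid[np.argmin(expected)]

def online_ml_predict(outcomes, loss):
    # Sequentially predict each outcome from the preceding ones and
    # accumulate the loss incurred along the sequence.
    total_loss, history = 0.0, []
    for y in outcomes:
        theta_hat = ml_estimate(history)           # step 1: ML estimation
        p = optimal_prediction(theta_hat, loss)    # step 2: loss-optimal prediction
        total_loss += loss(y, p)                   # step 3: observe outcome, pay loss
        history.append(y)
    return total_loss

# Example: squared loss versus absolute loss on the same binary sequence.
squared = lambda y, p: (y - p) ** 2
absolute = lambda y, p: np.abs(y - p)
data = [1, 0, 1, 1, 0, 1, 1, 1]
print(online_ml_predict(data, squared), online_ml_predict(data, absolute))
```

With squared loss the loss-optimal prediction coincides with the estimated mean, while with absolute loss it is a median of the estimated distribution; the grid search handles both (and other loss functions) uniformly, which mirrors the loss-function-agnostic character of the algorithms described above.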

Citation (APA)

Yamanishi, K. (1995). On-line maximum likelihood prediction with respect to general loss functions. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 904, pp. 84–98). Springer Verlag. https://doi.org/10.1007/3-540-59119-2_170
