Relative loss bounds and polynomial-time predictions for the K-LMS-NET algorithm

Abstract

We consider a two-layer network algorithm. The first layer consists of an uncountable number of linear units. Each linear unit is an LMS algorithm whose inputs are first "kernelized." Each unit is indexed by the value of a parameter corresponding to a parameterized reproducing kernel. The first-layer outputs are then connected to an exponential weights algorithm which combines them to produce the final output. We give loss bounds for this algorithm in general, and for two specific applications: prediction relative to the best convex combination of kernels, and prediction relative to the best width of a Gaussian kernel. The algorithm's predictions require the computation of an expectation which is a quotient of integrals, as seen in a variety of Bayesian inference problems. Typically this computational problem is tackled by MCMC, importance sampling, and other sampling techniques, for which there are few polynomial-time guarantees on the quality of the approximation in general and none for our problem specifically. We develop a novel deterministic polynomial-time approximation scheme for the computation of the expectations considered in this paper.
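To make the abstract's construction concrete: the second-layer prediction is plausibly an exponentially weighted average over the kernel parameter theta, i.e. a quotient of integrals of the form

\hat{y}_t \;=\; \frac{\int w_t(\theta)\,\hat{y}_t(\theta)\,d\theta}{\int w_t(\theta)\,d\theta},
\qquad
w_t(\theta) \;=\; \exp\!\Big(-\eta \sum_{s<t}\big(y_s - \hat{y}_s(\theta)\big)^2\Big),

where \hat{y}_t(\theta) is the prediction of the kernelized LMS unit indexed by theta and \eta is a mixing rate. The sketch below is a hedged, illustrative stand-in only: it replaces the paper's continuum of units (and its deterministic polynomial-time approximation scheme) with a finite grid of Gaussian kernel widths, and the class names, learning rate lr, and mixing rate eta are all hypothetical choices, not values from the paper.

```python
import numpy as np

def gaussian_kernel(x, z, width):
    """Gaussian (RBF) kernel with the given width parameter."""
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * width ** 2))

class KernelLMS:
    """One first-layer unit: LMS on kernelized inputs, kept in dual form."""
    def __init__(self, width, lr=0.1):
        self.width, self.lr = width, lr
        self.centers, self.alphas = [], []

    def predict(self, x):
        return sum(a * gaussian_kernel(x, c, self.width)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, x, y):
        # LMS / stochastic-gradient step on the squared loss.
        err = y - self.predict(x)
        self.centers.append(x)
        self.alphas.append(self.lr * err)

class KLMSNet:
    """Second layer: exponential weights over the units' cumulative losses."""
    def __init__(self, widths, eta=0.5, lr=0.1):
        self.units = [KernelLMS(w, lr) for w in widths]
        self.losses = np.zeros(len(self.units))
        self.eta = eta

    def predict(self, x):
        # Quotient-of-integrals expectation, approximated by a finite sum;
        # losses are shifted by their minimum for numerical stability.
        w = np.exp(-self.eta * (self.losses - self.losses.min()))
        preds = np.array([u.predict(x) for u in self.units])
        return float(w @ preds / w.sum())

    def update(self, x, y):
        for i, u in enumerate(self.units):
            self.losses[i] += (y - u.predict(x)) ** 2
            u.update(x, y)
```

In an online loop one would call net.predict(x) before observing y, then net.update(x, y). The grid of widths passed to KLMSNet (e.g. [0.5, 1.0, 2.0]) is where this sketch departs from the paper, whose contribution is precisely that the continuum expectation can be approximated deterministically in polynomial time rather than by gridding or sampling.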

Cite (APA)

Herbster, M. (2004). Relative loss bounds and polynomial-time predictions for the K-LMS-NET algorithm. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 3244, pp. 309–323). Springer Verlag. https://doi.org/10.1007/978-3-540-30215-5_24
