Binary exponentiated gradient algorithm for learning linear functions

Abstract

This paper develops and analyzes a new online algorithm for learning linear functions, called the Binary Exponentiated Gradient algorithm (BEG). BEG imposes a lower and an upper bound on every weight. Using Kivinen and Warmuth's methodology, the BEG algorithm is derived from a binary entropy distance function and the square loss function, and worst-case upper bounds on the square loss are proved for BEG on arbitrary sequences of trials (instance-outcome pairs). BEG's behavior is unusual in that its worst-case performance is in some situations comparable to that of the well-known gradient descent algorithms, e.g., Widrow-Hoff, while in others it is comparable to that of the newer exponentiated gradient algorithms. An experiment demonstrates a setting in which BEG outperforms both.
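
For concreteness, below is a minimal Python sketch of a BEG-style update under two simplifying assumptions: the weights are confined to (0, 1) rather than an arbitrary lower/upper interval, and the loss is the square loss named in the abstract. In the Kivinen-Warmuth framework the abstract cites, minimizing the binary relative entropy to the old weight vector plus a linearized loss term yields an additive step in logit space, equivalently a multiplicative (exponentiated-gradient) step that keeps each weight strictly inside its bounds. The function name beg_update, the learning rate, and the toy data are illustrative; this is a sketch, not the paper's exact algorithm.

    import numpy as np

    def beg_update(w, x, y, eta=0.1):
        """One BEG-style update for weights confined to (0, 1).

        Sketch only: derived from the binary relative entropy
          d(u, w) = sum_i [u_i ln(u_i/w_i) + (1-u_i) ln((1-u_i)/(1-w_i))]
        and the square loss (y_hat - y)^2. The paper's general
        algorithm allows arbitrary per-weight bounds; (0, 1) is
        assumed here for simplicity.
        """
        y_hat = float(np.dot(w, x))   # linear prediction
        g = 2.0 * (y_hat - y) * x     # gradient of the square loss w.r.t. w
        # Additive step in logit space: logit(w) <- logit(w) - eta * g,
        # written multiplicatively so each weight stays in (0, 1).
        num = w * np.exp(-eta * g)
        return num / (num + (1.0 - w))

A toy run, tracking a fixed target vector u in (0, 1)^3 on random instances:

    rng = np.random.default_rng(0)
    u = np.array([0.8, 0.1, 0.5])
    w = np.full(3, 0.5)               # uninformative starting weights
    for _ in range(500):
        x = rng.uniform(-1, 1, size=3)
        w = beg_update(w, x, float(np.dot(u, x)), eta=0.2)
    print(np.round(w, 2))             # w should approach u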

Citation (APA)

Bylander, T. (1997). Binary exponentiated gradient algorithm for learning linear functions. In Proceedings of the Annual ACM Conference on Computational Learning Theory (pp. 184–192). ACM. https://doi.org/10.1145/267460.267495
