Evidence that incremental Delta-Bar-Delta is an attribute-efficient linear learner

Abstract

The Winnow class of on-line linear learning algorithms [10,11] was designed to be attribute-efficient. When learning with many irrelevant attributes, Winnow makes a number of errors that is only logarithmic in the total number of attributes, whereas the Perceptron algorithm makes a nearly linear number of errors. This paper presents empirical evidence that the Incremental Delta-Bar-Delta (IDBD) second-order gradient-descent algorithm [14] is attribute-efficient: it performs similarly to Winnow on tasks with many irrelevant attributes, and it outperforms Winnow on a task where Winnow does poorly. Preliminary analysis supports this empirical claim by showing that IDBD, like Winnow and other attribute-efficient algorithms, and unlike the Perceptron algorithm, has weights that can grow exponentially quickly. By virtue of its more flexible approach to weight updates, however, IDBD may be a more practically useful learning algorithm than Winnow.
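For reference, below is a minimal sketch of one IDBD step following the update rules in Sutton's paper [14]: each weight keeps its own log step-size, adapted by meta-gradient descent, which is what lets useful weights grow exponentially fast. The function name, variable names, and the meta-step-size default `theta=0.01` are illustrative assumptions, not taken from the article.

```python
import numpy as np

def idbd_step(w, beta, h, x, y_target, theta=0.01):
    """One IDBD update (after Sutton [14]); names and theta are illustrative.

    w    : weight vector
    beta : per-weight log step-sizes (step-size alpha[i] = exp(beta[i]))
    h    : per-weight trace of recent weight changes
    """
    y = w @ x                       # linear prediction
    delta = y_target - y            # prediction error
    beta += theta * delta * x * h   # meta-gradient step on log step-sizes
    alpha = np.exp(beta)            # per-weight step-sizes, always positive
    w += alpha * delta * x          # LMS-style step with per-weight rates
    # Decay the trace, keeping only the part not consumed by this update
    h = h * np.clip(1.0 - alpha * x * x, 0.0, None) + alpha * delta * x
    return w, beta, h
```

Because the updates to `beta` are additive, the effective step-sizes `alpha = exp(beta)` change multiplicatively, so weights attached to consistently predictive attributes can grow exponentially quickly: the property the abstract identifies as the mechanism behind attribute efficiency.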

Citation (APA)

Harris, H. D. (2002). Evidence that incremental Delta-Bar-Delta is an attribute-efficient linear learner. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2430, pp. 135–147). Springer Verlag. https://doi.org/10.1007/3-540-36755-1_12
