Batch-incremental versus instance-incremental learning in dynamic and evolving data


Abstract

Many real-world problems involve the challenging context of data streams, where classifiers must be incremental: able to learn from a theoretically infinite stream of examples using limited time and memory, while being able to predict at any point. Two approaches dominate the literature: batch-incremental methods, which gather examples into batches to train models, and instance-incremental methods, which learn from each example as it arrives. Typically, papers in the literature choose one of these approaches but provide insufficient evidence or references to justify their choice. We provide a first in-depth analysis comparing both approaches, including how they adapt to concept drift, and an extensive empirical study comparing several different versions of each approach. Our results reveal the respective advantages and disadvantages of the methods, which we discuss in detail.
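To make the distinction concrete, the sketch below contrasts the two approaches on a simulated stream under a test-then-train (prequential) evaluation. It is a minimal illustration only: it uses scikit-learn rather than the framework evaluated in the paper, and names such as BATCH_SIZE and the synthetic stream are illustrative assumptions, not from the source.

```python
# Minimal sketch: instance-incremental vs. batch-incremental learning
# on a simulated stream. Illustrative only; not the paper's setup.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def stream(n=2000, d=5):
    """Yield (x, y) pairs one at a time, simulating a data stream."""
    for _ in range(n):
        x = rng.normal(size=d)
        y = int(x.sum() > 0)
        yield x, y

CLASSES = np.array([0, 1])
BATCH_SIZE = 200  # assumed batch size for the batch-incremental learner

# Instance-incremental: update the model on every arriving example.
inc_model = SGDClassifier()
# Batch-incremental: buffer examples, train a fresh model per batch.
batch_model, buf_X, buf_y = None, [], []

inc_correct = batch_correct = n_seen = 0
for x, y in stream():
    n_seen += 1
    # Test-then-train: predict on the new example before learning from it.
    if n_seen > 1:
        inc_correct += int(inc_model.predict(x.reshape(1, -1))[0] == y)
    if batch_model is not None:
        batch_correct += int(batch_model.predict(x.reshape(1, -1))[0] == y)

    # Instance-incremental update (one example at a time).
    inc_model.partial_fit(x.reshape(1, -1), [y], classes=CLASSES)

    # Batch-incremental update (retrain once a full batch has arrived).
    buf_X.append(x); buf_y.append(y)
    if len(buf_X) == BATCH_SIZE:
        batch_model = DecisionTreeClassifier().fit(buf_X, buf_y)
        buf_X, buf_y = [], []  # discard the batch: limited memory

print(f"instance-incremental accuracy: {inc_correct / (n_seen - 1):.3f}")
print(f"batch-incremental accuracy:    {batch_correct / (n_seen - BATCH_SIZE):.3f}")
```

The trade-off the paper studies is visible even in this toy setting: the instance-incremental learner can predict and adapt from the first examples onward, while the batch-incremental learner must wait for a full batch and forgets everything outside the current batch unless batches are explicitly retained.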

Citation (APA)

Read, J., Bifet, A., Pfahringer, B., & Holmes, G. (2012). Batch-incremental versus instance-incremental learning in dynamic and evolving data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7619 LNCS, pp. 313–323). https://doi.org/10.1007/978-3-642-34156-4_29
