An ensemble approach for incremental learning in nonstationary environments

Abstract

We describe an ensemble-of-classifiers-based algorithm for incremental learning in nonstationary environments. In this formulation, we assume that the learner is presented with a series of training datasets, each drawn from a different snapshot of a distribution that is drifting at an unknown rate. Furthermore, we assume that the algorithm must learn the new environment incrementally, that is, without access to previously available data. Instead of a time window over incoming instances or age-based forgetting, as used by most ensemble-based nonstationary learning algorithms, a strategic weighting mechanism is employed that tracks the classifiers' performances over drifting environments to determine appropriate voting weights. Specifically, the proposed approach generates a single classifier for each dataset that becomes available and then combines them through a dynamically modified weighted majority vote, where the voting weights themselves are computed as weighted averages of the classifiers' individual performances over all environments. We describe the implementation details of this approach, as well as its initial results on simulated nonstationary environments. © Springer-Verlag Berlin Heidelberg 2007.
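
To make the mechanism in the abstract concrete, the following is a minimal Python sketch of the general idea: train one classifier per incoming batch, evaluate every ensemble member on each new environment, and vote with weights derived from each member's performance history. The recency-decayed averaging scheme, the scikit-learn decision tree base learner, and the class/parameter names (DriftingEnsemble, decay) are illustrative assumptions, not the authors' exact formulation from the paper.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

class DriftingEnsemble:
    def __init__(self, decay=0.5):
        self.decay = decay          # weight given to older environments (assumed scheme)
        self.classifiers = []       # one classifier per training batch
        self.accuracy_history = []  # accuracy_history[k] = accuracies of classifier k on the batches seen since it was created

    def partial_fit(self, X, y):
        # Train a new classifier on the latest batch only; past data is not stored,
        # matching the incremental-learning constraint described in the abstract.
        clf = DecisionTreeClassifier(max_depth=5).fit(X, y)
        self.classifiers.append(clf)
        self.accuracy_history.append([])
        # Record every classifier's accuracy on the newest environment.
        for k, c in enumerate(self.classifiers):
            self.accuracy_history[k].append(c.score(X, y))

    def _voting_weights(self):
        # Voting weight of each classifier: a recency-weighted average of its
        # accuracies over the environments it has been evaluated on (illustrative).
        weights = []
        for accs in self.accuracy_history:
            n = len(accs)
            recency = np.array([self.decay ** (n - 1 - t) for t in range(n)])
            weights.append(np.dot(recency, accs) / recency.sum())
        w = np.array(weights)
        return w / w.sum()

    def predict(self, X):
        # Dynamically weighted majority vote across all classifiers.
        weights = self._voting_weights()
        votes = {}
        for w, clf in zip(weights, self.classifiers):
            for i, label in enumerate(clf.predict(X)):
                votes.setdefault(i, {}).setdefault(label, 0.0)
                votes[i][label] += w
        return np.array([max(v, key=v.get) for _, v in sorted(votes.items())])

In use, partial_fit would be called once per incoming dataset as the distribution drifts, and predict applied to test data from the current environment; classifiers trained on older snapshots are never discarded, but their influence shrinks as their recent accuracy declines.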

Citation (APA)

Muhlbaier, M. D., & Polikar, R. (2007). An ensemble approach for incremental learning in nonstationary environments. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4472 LNCS, pp. 490–500). Springer Verlag. https://doi.org/10.1007/978-3-540-72523-7_49
