A selective transfer learning method for concept drift adaptation

Abstract

Concept drift is one of the key challenges that incremental learning must deal with. Many algorithms have been proposed to cope with it, but responding quickly to changes in the underlying concept remains difficult. In this paper, a novel method named Selective Transfer Incremental Learning (STIL) is proposed to address this issue. STIL applies a selective transfer strategy on top of the well-known chunk-based ensemble framework. In this way, STIL adapts well to the new concept in the data through transfer learning, while an appropriate selection policy effectively prevents the negative transfer and overfitting that transfer learning may introduce. The algorithm was evaluated on 15 synthetic datasets and three real-world datasets; the experimental results show that STIL outperforms five other state-of-the-art methods on almost all of the datasets.
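The abstract only outlines the approach; the sketch below is not the authors' STIL implementation, but it illustrates the general idea of a chunk-based ensemble with a selective policy: past models are retained ("transferred") only if they remain accurate on the newest chunk, which is one simple way to limit negative transfer. The class name, the accuracy-threshold selection rule, and all parameter values are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of a chunk-based ensemble with selective reuse of past
# models under concept drift. NOT the paper's STIL algorithm; the selection
# rule (accuracy threshold on the new chunk) is an assumed simplification.
import numpy as np
from sklearn.tree import DecisionTreeClassifier


class SelectiveChunkEnsemble:
    def __init__(self, max_members=10, keep_threshold=0.55):
        self.max_members = max_members        # cap on ensemble size
        self.keep_threshold = keep_threshold  # min accuracy on the new chunk to be kept
        self.members = []                     # list of (model, weight) pairs

    def update(self, X_chunk, y_chunk):
        """Process one data chunk: selectively keep old members, then add a new one."""
        survivors = []
        for model, _ in self.members:
            acc = model.score(X_chunk, y_chunk)
            if acc >= self.keep_threshold:    # selective transfer: keep only still-useful models
                survivors.append((model, acc))
        new_model = DecisionTreeClassifier(max_depth=5).fit(X_chunk, y_chunk)
        survivors.append((new_model, new_model.score(X_chunk, y_chunk)))
        # if the cap is exceeded, keep the highest-weighted members
        survivors.sort(key=lambda mw: mw[1], reverse=True)
        self.members = survivors[: self.max_members]

    def predict(self, X):
        """Weighted majority vote over the surviving ensemble members."""
        votes = [dict() for _ in range(len(X))]
        for model, weight in self.members:
            for i, label in enumerate(model.predict(X)):
                votes[i][label] = votes[i].get(label, 0.0) + weight
        return np.array([max(v, key=v.get) for v in votes])
```

On a stream, `update()` would be called once per arriving chunk and `predict()` used between chunks; the accuracy threshold stands in for whatever selection criterion the full method uses.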

Citation (APA)

Xie, G., Sun, Y., Lin, M., & Tang, K. (2017). A selective transfer learning method for concept drift adaptation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10262 LNCS, pp. 353–361). Springer Verlag. https://doi.org/10.1007/978-3-319-59081-3_42
