Few sample learning without data storage for lifelong stream mining (student abstract)

Abstract

Continuously mining complex data streams has recently attracted increasing attention, due to the rapid growth of real-world vision/signal applications such as self-driving cars and online social media messages. In this paper, we address two significant problems in the lifelong/incremental stream mining scenario: first, how to make learning algorithms generalize to unseen classes from only a few labeled samples; second, whether it is possible to avoid storing instances from previously seen classes while still solving the catastrophic forgetting problem. We introduce a novel stream mining framework that classifies an infinite stream of data whose categories arrive at different times. We apply a few-sample learning strategy so that the model can recognize novel classes from limited samples; at the same time, we implement an incremental generative model that maintains old knowledge when learning newly arriving categories, thereby avoiding violations of data privacy and memory restrictions. We evaluate our approach in the continual class-incremental setup on classification tasks and ensure sufficient model capacity to accommodate learning the new incoming categories.
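To make the storage-free idea concrete, here is a minimal, hypothetical sketch of generative replay for class-incremental learning. It is not the paper's actual model: instead of the authors' incremental generative network, it uses a toy per-class Gaussian generator (storing only a mean and standard deviation per class, never raw instances) and a nearest-class-mean classifier; all class and function names are illustrative assumptions.

```python
# Illustrative sketch of "no data storage" class-incremental learning
# via generative replay. The generator here is a toy per-class Gaussian
# (mean/std only), NOT the incremental generative model from the paper.
import numpy as np

class GenerativeReplayLearner:
    def __init__(self, rng_seed=0):
        self.gen = {}    # class label -> (mean, std): generator parameters
        self.proto = {}  # class label -> classifier prototype (class mean)
        self.rng = np.random.default_rng(rng_seed)

    def learn_task(self, X_new, y_new, replay_per_class=50):
        # Replay synthetic samples for previously seen classes instead of
        # storing their raw instances (addresses forgetting and privacy).
        X_parts, y_parts = [X_new], [y_new]
        for c, (mu, sigma) in self.gen.items():
            Xr = self.rng.normal(mu, sigma, size=(replay_per_class, mu.shape[0]))
            X_parts.append(Xr)
            y_parts.append(np.full(replay_per_class, c))
        X = np.vstack(X_parts)
        y = np.concatenate(y_parts)
        # Update nearest-class-mean prototypes on the mixture of new real
        # data and replayed synthetic data.
        for c in np.unique(y):
            self.proto[c] = X[y == c].mean(axis=0)
        # Fit the generator for newly seen classes from the few labeled
        # samples available (the few-sample setting).
        for c in np.unique(y_new):
            Xc = X_new[y_new == c]
            self.gen[c] = (Xc.mean(axis=0), Xc.std(axis=0) + 1e-6)

    def predict(self, X):
        labels = sorted(self.proto)
        P = np.stack([self.proto[c] for c in labels])
        d = ((X[:, None, :] - P[None, :, :]) ** 2).sum(axis=-1)
        return np.array(labels)[d.argmin(axis=1)]
```

After a second task arrives, the learner never touches the first task's raw data; replayed Gaussian samples stand in for it, so old classes remain classifiable while memory stays bounded by the number of classes, not the number of instances.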

CITATION STYLE

APA

Wang, Z., Wang, Y., Lin, Y., Dong, B., Tao, H., & Khan, L. (2020). Few sample learning without data storage for lifelong stream mining (student abstract). In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 13961–13962). AAAI Press. https://doi.org/10.1609/aaai.v34i10.7253
