Continual Learning for Classification Problems: A Survey


Abstract

Artificial neural networks perform some specific tasks much better than humans but fail at toddler-level skills, because those skills require learning new things and transferring them to other contexts. The goal of general AI is therefore to build models that learn continually, as humans do; the concept of continual learning is inspired by lifelong learning in humans. Continual learning remains a challenge in the machine learning community, since acquiring knowledge from data distributions that are non-stationary generally leads to catastrophic forgetting, also known as catastrophic interference. For state-of-the-art deep neural networks, which learn from stationary data distributions, this is a drawback. In this survey, we summarize the continual learning strategies used for classification problems, including regularization, memory-based, structure-based, and energy-based models.
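To illustrate the regularization family of strategies mentioned above, the following is a minimal sketch of an Elastic Weight Consolidation (EWC)-style penalty, a well-known regularization approach to catastrophic forgetting. All names and the toy values are hypothetical and chosen only for illustration; the survey itself does not prescribe this code.

```python
def ewc_penalty(params, old_params, fisher, lam=1.0):
    """Quadratic penalty that anchors parameters important to a previous task.

    params     : current parameter values
    old_params : parameter values after training on the previous task
    fisher     : per-parameter importance (diagonal Fisher information)
    lam        : regularization strength (hypothetical default)
    """
    return 0.5 * lam * sum(
        f * (p - p_old) ** 2
        for p, p_old, f in zip(params, old_params, fisher)
    )

# Toy example: parameters that drift from values important to the old task
# (high Fisher weight) are penalized more than unimportant ones.
params     = [0.9, -0.2, 1.5]
old_params = [1.0,  0.0, 1.5]
fisher     = [2.0,  0.1, 5.0]   # high value = important for the old task
penalty = ewc_penalty(params, old_params, fisher, lam=1.0)
```

During training on a new task, this penalty would be added to the new task's loss, so that gradient descent trades off new-task performance against preserving parameters the old task relied on.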

Citation (APA)

Vijayan, M., & Sridhar, S. S. (2021). Continual Learning for Classification Problems: A Survey. In IFIP Advances in Information and Communication Technology (Vol. 611 IFIPAICT, pp. 156–166). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-92600-7_15
