Efficient Incremental Learning Using Dynamic Correction Vector

Abstract

One major challenge for modern artificial neural networks (ANNs) is that they typically do not handle incremental learning well: while learning new features, performance on previously learned features usually deteriorates. This phenomenon, called catastrophic forgetting, is a serious obstacle to continuous, incremental, and intelligent learning. In this work, we propose an algorithm based on a dynamic correction vector to address both the bias introduced by knowledge distillation and the overfitting problem. Specifically, we make the following contributions: 1) we design a novel algorithm based on a dynamic correction vector; 2) we propose new loss functions accordingly. Experimental results on the MNIST and CIFAR-100 datasets demonstrate that our technique outperforms state-of-the-art incremental learning methods by 4% on large datasets.
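
The paper's full method is not reproduced here, but the abstract suggests a distillation-style setup in which the old model's logits are adjusted by a learned correction vector before the distillation loss is computed. The sketch below is one plausible reading of that idea, not the authors' actual formulation: the class name `CorrectionDistillationLoss`, the per-class `correction` parameter, and the exact way it shifts the teacher's logits are all assumptions made for illustration.

```python
# Hypothetical sketch of a correction-vector distillation loss for
# incremental learning. Names and formulation are assumptions, not
# the authors' published code.
import torch
import torch.nn.functional as F


class CorrectionDistillationLoss(torch.nn.Module):
    """Temperature-scaled distillation loss whose teacher logits are
    shifted by a learnable per-class correction vector."""

    def __init__(self, num_old_classes: int, temperature: float = 2.0):
        super().__init__()
        # Learnable correction vector (one entry per old class),
        # trained jointly with the rest of the network.
        self.correction = torch.nn.Parameter(torch.zeros(num_old_classes))
        self.temperature = temperature

    def forward(self, new_logits: torch.Tensor,
                old_logits: torch.Tensor) -> torch.Tensor:
        # new_logits: current model's outputs on the old classes, (B, C_old)
        # old_logits: frozen previous model's outputs, same shape
        t = self.temperature
        # Shift the teacher's logits by the correction vector to
        # counteract the bias plain distillation introduces.
        corrected = (old_logits + self.correction) / t
        log_p_new = F.log_softmax(new_logits / t, dim=1)
        p_old = F.softmax(corrected, dim=1)
        # Standard distillation KL, rescaled by t^2 as in Hinton et al.
        return F.kl_div(log_p_new, p_old, reduction="batchmean") * (t * t)
```

In this reading, the correction vector lets the distillation target adapt during training rather than staying fixed, which is one way a "dynamic" correction could compensate for the old-class bias the abstract describes.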

Citation (APA)

Xiang, Y., Miao, Y., Chen, J., & Xuan, Q. (2020). Efficient Incremental Learning Using Dynamic Correction Vector. IEEE Access, 8, 23090–23099. https://doi.org/10.1109/ACCESS.2019.2963461
