Pseudorehearsal Approach for Incremental Learning of Deep Convolutional Neural Networks


Abstract

Deep Convolutional Neural Networks, like most connectionist models, suffer from catastrophic forgetting while training for a new, unknown task. One of the simplest solutions to this issue is adding samples of previous data, with the drawback of having to store ever more training data; an alternative is generating patterns that evoke responses similar to those of the previous task. We propose a model that uses a Recurrent Neural Network-based image generator to provide a Deep Convolutional Network with a limited number of samples alongside the new training data. Simulation results show that our proposal is able to retain previous knowledge whenever a few pseudo-samples of previously recorded patterns are generated. Despite achieving lower performance than feeding the network samples of the real dataset, this model is more biologically plausible and might help reduce the need to store previously trained data in larger-scale classification models.
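The core pseudorehearsal idea described in the abstract can be illustrated independently of the paper's specific RNN generator and CNN architecture. The sketch below is a minimal, hypothetical demonstration with a linear least-squares "model" and random pseudo-inputs (all names and the toy setup are assumptions, not the authors' implementation): pseudo-samples are labeled by the old model's own outputs and mixed into the new task's training set, which pulls the retrained weights back toward the old solution.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 5, 3

# "Old task": fit a linear model W_old to data from one target mapping.
W_true_old = rng.normal(size=(d_in, d_out))
X_old = rng.normal(size=(50, d_in))
Y_old = X_old @ W_true_old
W_old, *_ = np.linalg.lstsq(X_old, Y_old, rcond=None)

# Pseudorehearsal: draw random pseudo-inputs and label them with the
# old model's responses (no stored training data is needed).
X_pseudo = rng.normal(size=(100, d_in))
Y_pseudo = X_pseudo @ W_old

# "New task": a different target mapping.
W_true_new = rng.normal(size=(d_in, d_out))
X_new = rng.normal(size=(50, d_in))
Y_new = X_new @ W_true_new

# Naive retraining on the new task alone forgets the old task.
W_naive, *_ = np.linalg.lstsq(X_new, Y_new, rcond=None)

# Retraining on new data plus pseudo-samples retains old-task behavior.
X_mix = np.vstack([X_new, X_pseudo])
Y_mix = np.vstack([Y_new, Y_pseudo])
W_rehearsed, *_ = np.linalg.lstsq(X_mix, Y_mix, rcond=None)

def old_task_error(W):
    """Mean squared error on the original task's data."""
    return float(np.mean((X_old @ W - Y_old) ** 2))

err_naive = old_task_error(W_naive)
err_rehearsed = old_task_error(W_rehearsed)
print(f"old-task MSE, naive retraining:  {err_naive:.4f}")
print(f"old-task MSE, pseudorehearsal:   {err_rehearsed:.4f}")
```

In the paper's setting the same principle applies, with the RNN generator replacing the random pseudo-inputs and the CNN replacing the linear model; the trade-off noted in the abstract (lower accuracy than rehearsing real samples, but no stored data) carries over.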


APA

Mellado, D., Saavedra, C., Chabert, S., & Salas, R. (2017). Pseudorehearsal Approach for Incremental Learning of Deep Convolutional Neural Networks. In Communications in Computer and Information Science (Vol. 720, pp. 118–126). Springer Verlag. https://doi.org/10.1007/978-3-319-71011-2_10
