Overcoming Catastrophic Interference with Bayesian Learning and Stochastic Langevin Dynamics


Abstract

Neural networks suffer from severe catastrophic forgetting when information is learned sequentially. Although simply replaying all previous data alleviates the problem, it may require a large memory to store all previous training examples. Even with enough memory, joint training can be infeasible if access to past data is limited. We developed generative methods for preventing catastrophic forgetting that do not require the presence of previously used data. The developed methods are based on activation maximization of output neurons and on sampling from the posterior probability of the data distribution. The methods work with regular feedforward networks. Proof-of-concept experiments were performed on publicly available datasets.
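The abstract gives no implementation details, but the activation-maximization idea it names can be illustrated with a minimal PyTorch sketch: starting from noise, optimize an input so that it strongly activates a chosen output neuron, and use the resulting pseudo-samples as replay data in place of the unavailable originals. Everything below (the network `net`, the function `generate_pseudo_samples`, layer sizes, and hyperparameters) is an illustrative assumption, not the authors' code.

```python
import torch
import torch.nn as nn

# Hypothetical feedforward classifier; the paper's exact architecture
# is not stated in the abstract.
net = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

def generate_pseudo_samples(model, class_idx, n_samples=64, steps=200, lr=0.1):
    """Synthesize inputs that strongly activate one output neuron
    (activation maximization), to stand in for unavailable old data."""
    model.eval()
    x = torch.randn(n_samples, 784, requires_grad=True)  # start from noise
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)
        # Maximize the target neuron; a small L2 penalty keeps inputs bounded.
        loss = -logits[:, class_idx].mean() + 1e-3 * x.pow(2).mean()
        loss.backward()
        opt.step()
    return x.detach(), torch.full((n_samples,), class_idx)

# When training on a new task, mix pseudo-samples of old classes into
# each batch instead of real (no longer accessible) old data.
pseudo_x, pseudo_y = generate_pseudo_samples(net, class_idx=3)
```

The abstract's second method, sampling from the posterior of the data distribution via stochastic Langevin dynamics, would replace this deterministic optimization with gradient steps perturbed by Gaussian noise scaled to the step size; that variant is not shown here.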

Citation (APA)

Leontev, M., Mikheev, A., Sviatov, K., & Sukhov, S. (2019). Overcoming Catastrophic Interference with Bayesian Learning and Stochastic Langevin Dynamics. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11554 LNCS, pp. 370–378). Springer Verlag. https://doi.org/10.1007/978-3-030-22796-8_39
