Using Hindsight to Anchor Past Knowledge in Continual Learning


Abstract

In continual learning, the learner faces a stream of data whose distribution changes over time. Modern neural networks are known to suffer in this setting, as they quickly forget previously acquired knowledge. To address such catastrophic forgetting, many continual learning methods implement some form of experience replay, re-learning on past data stored in a small buffer known as episodic memory. In this work, we complement experience replay with a new objective that we call “anchoring”, where the learner uses bilevel optimization to update its knowledge of the current task while keeping intact its predictions on anchor points from past tasks. These anchor points are learned with gradient-based optimization so as to maximize forgetting, which is approximated by fine-tuning the currently trained model on the episodic memory of past tasks. Experiments on several supervised continual learning benchmarks demonstrate that our approach improves over standard experience replay in terms of both accuracy and forgetting, across a range of episodic memory sizes.
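
As a rough illustration of the method described in the abstract, the PyTorch sketch below combines an experience-replay loss with a penalty on how much a differentiable one-step “lookahead” update on the current minibatch would move the model's predictions at the anchor points, and it learns anchors by gradient ascent on the loss of a temporary copy of the model fine-tuned on episodic memory, used as a proxy for forgetting. All function names, hyperparameters, and exact loss forms here are illustrative assumptions based only on the abstract, not the authors' released implementation.

# Minimal sketch of (1) an anchored replay step with a bilevel penalty and
# (2) learning anchor points by maximizing a forgetting proxy. Names such as
# anchored_replay_step, learn_anchor, inner_lr, and lam are hypothetical.
import copy
import torch
import torch.nn.functional as F
from torch.func import functional_call  # requires PyTorch >= 2.0


def anchored_replay_step(model, optimizer, cur_batch, mem_batch, anchor_x,
                         inner_lr=0.1, lam=1.0):
    """One update: replay loss plus a penalty on how much a one-step
    lookahead update on the current task moves the anchor predictions."""
    x, y = cur_batch      # current-task minibatch
    mx, my = mem_batch    # minibatch sampled from episodic memory
    params = dict(model.named_parameters())

    with torch.no_grad():  # anchor predictions we want to keep intact
        targets = model(anchor_x)

    # Inner step: a differentiable SGD step on the current-task batch.
    inner_loss = F.cross_entropy(functional_call(model, params, (x,)), y)
    grads = torch.autograd.grad(inner_loss, tuple(params.values()),
                                create_graph=True)
    lookahead = {n: p - inner_lr * g
                 for (n, p), g in zip(params.items(), grads)}

    # Outer step: replay on current + memory data, plus anchoring penalty.
    replay_loss = F.cross_entropy(model(torch.cat([x, mx])),
                                  torch.cat([y, my]))
    drift = F.mse_loss(functional_call(model, lookahead, (anchor_x,)),
                       targets)
    optimizer.zero_grad()
    (replay_loss + lam * drift).backward()  # second-order grads via lookahead
    optimizer.step()


def learn_anchor(model, memory_batches, anchor_x, anchor_y,
                 ft_lr=0.1, anchor_lr=1.0, steps=1):
    """Move an anchor toward points that a memory-fine-tuned copy of the
    model gets wrong, i.e. gradient ascent on a forgetting proxy."""
    tmp = copy.deepcopy(model)  # approximate future training by fine-tuning
    opt = torch.optim.SGD(tmp.parameters(), lr=ft_lr)
    for mx, my in memory_batches:
        opt.zero_grad()
        F.cross_entropy(tmp(mx), my).backward()
        opt.step()

    anchor_x = anchor_x.detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(tmp(anchor_x), anchor_y)
        (g,) = torch.autograd.grad(loss, anchor_x)
        # Ascent step: increase the fine-tuned model's loss at the anchor.
        anchor_x = (anchor_x + anchor_lr * g).detach().requires_grad_(True)
    return anchor_x.detach()

Note that the gradient must flow through the lookahead step for the anchoring penalty to do anything: at the current parameters, the distance between the model and its own detached anchor predictions is zero and has zero gradient. A first-order variant could detach the inner gradients to save memory, at some cost in fidelity to the bilevel formulation.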

Cite (APA)

Chaudhry, A., Gordo, A., Dokania, P., Torr, P., & Lopez-Paz, D. (2021). Using Hindsight to Anchor Past Knowledge in Continual Learning. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 8B, pp. 6993–7001). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i8.16861
