Marginal Replay vs Conditional Replay for Continual Learning

Abstract

We present a new replay-based method for continual classification learning that we term “conditional replay”, which generates samples and labels together by sampling from a distribution conditioned on the class. We compare conditional replay to another replay-based continual-learning paradigm (which we term “marginal replay”) that generates samples independently of their class and assigns labels in a separate step. The main advantage of conditional replay is that labels for generated samples need not be inferred, which reduces the margin for error in complex continual classification tasks. We demonstrate the effectiveness of this approach on novel and standard benchmarks constructed from MNIST and FashionMNIST data, and compare it to the regularization-based elastic weight consolidation (EWC) method [17, 34].
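
The difference between the two paradigms can be made concrete with a short sketch. The following Python/PyTorch fragment is illustrative only: g_cond, g_marg, and old_classifier are hypothetical placeholders for a class-conditional generator, an unconditional generator, and a frozen copy of the classifier trained on previous tasks; the paper's actual generative models and training procedure are not reproduced here.

import torch

def conditional_replay_batch(g_cond, old_classes, batch_size):
    # Conditional replay: pick labels first, then generate samples from
    # the class-conditional distribution p(x | y). No label inference is
    # needed, so replayed samples cannot be mislabeled by a classifier.
    y = old_classes[torch.randint(len(old_classes), (batch_size,))]
    x = g_cond(y)
    return x, y

def marginal_replay_batch(g_marg, old_classifier, batch_size):
    # Marginal replay: generate samples from the marginal p(x), then
    # infer labels with the previous classifier. This argmax step is the
    # extra source of error that conditional replay avoids.
    x = g_marg(batch_size)
    with torch.no_grad():
        y = old_classifier(x).argmax(dim=1)
    return x, y

# Example usage (assuming classes 0-4 have been seen in earlier tasks):
# old_classes = torch.arange(5)
# x, y = conditional_replay_batch(g_cond, old_classes, batch_size=128)

In both variants the replayed batch is interleaved with real data from the current task when training on new classes; the abstract's claim about a reduced margin for error corresponds to the absence of the label-inference step in the conditional variant.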

Citation (APA)

Lesort, T., Gepperth, A., Stoian, A., & Filliat, D. (2019). Marginal Replay vs Conditional Replay for Continual Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11728 LNCS, pp. 466–480). Springer Verlag. https://doi.org/10.1007/978-3-030-30484-3_38
