Generative Adversarial Networks and Continual Learning

  • Liang K
  • Li C
  • Wang G
  • Carin L

Abstract

There is a strong emphasis in the continual learning literature on sequential classification experiments, where each task bears little resemblance to previous ones. While certainly a form of continual learning, such tasks do not accurately represent many real-world continual learning problems, where the data distribution often evolves slowly over time. We propose using Generative Adversarial Networks (GANs) as a source of potentially unlimited datasets of this nature. We also identify that the dynamics of GAN training naturally constitute a continual learning problem, and show that leveraging continual learning methods can improve performance. As such, we show that techniques from continual learning and GAN training, typically studied separately, can be used to each other’s benefit.
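The "slowly evolving distribution" that the abstract contrasts with standard task-incremental benchmarks can be illustrated with a toy stream of tasks whose class means drift gradually between tasks. This is a hypothetical NumPy sketch for intuition only, not the authors' GAN-generated data:

```python
import numpy as np

def drifting_task_stream(num_tasks=5, samples_per_task=100, drift=0.5, seed=0):
    """Yield (X, y) classification tasks whose class means drift slowly
    from task to task, mimicking a data distribution that evolves
    gradually over time rather than changing abruptly.
    (Illustrative sketch; not the authors' method.)"""
    rng = np.random.default_rng(seed)
    mean0, mean1 = np.array([-1.0, 0.0]), np.array([1.0, 0.0])
    for t in range(num_tasks):
        # Slow drift along one axis; consecutive tasks stay similar.
        shift = np.array([0.0, drift * t])
        X0 = rng.normal(mean0 + shift, 0.3, size=(samples_per_task, 2))
        X1 = rng.normal(mean1 + shift, 0.3, size=(samples_per_task, 2))
        X = np.vstack([X0, X1])
        y = np.array([0] * samples_per_task + [1] * samples_per_task)
        yield X, y

tasks = list(drifting_task_stream())
```

In the paper's proposal, a GAN trained on real data would play the role of this toy sampler, interpolating in latent space to produce an effectively unlimited stream of gradually shifting tasks.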

Cite

CITATION STYLE

APA

Liang, K. J., Li, C., Wang, G., & Carin, L. (2018). Generative Adversarial Networks and Continual Learning. NeurIPS 2018, 1–10.
