Counter-Contrastive Learning for Language GANs

Abstract

Generative Adversarial Networks (GANs) have achieved great success in image synthesis but have proven difficult to apply to natural language generation. The challenge arises from the uninformative learning signals passed from the discriminator: such poor signals limit the generator's capacity to produce language with rich structure and semantics. In this paper, we propose a counter-contrastive learning (CCL) method to support the generator's training in language GANs. In contrast to standard GANs, which rely on a simple binary classifier to discriminate real samples from fake ones, we employ a counter-contrastive learning signal that advances the training of the language synthesizer by (1) pulling the representations of generated and real samples together and (2) pushing the representations of real samples apart, so that the generator competes with the discriminator and prevents it from being overtrained. We evaluate our method on both synthetic and real benchmarks and achieve performance competitive with previous GANs for adversarial sequence generation.
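The abstract describes the two CCL terms only verbally. As a rough illustration (not the authors' released implementation), the PyTorch sketch below shows one plausible way to instantiate them; the InfoNCE-style ratio, the temperature `tau`, and the use of pooled (batch, dim) sentence representations are all assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def counter_contrastive_loss(real_repr, fake_repr, tau=0.5):
    """Illustrative counter-contrastive generator loss.

    real_repr, fake_repr: (batch, dim) sentence representations of real
    and generated samples (equal batch sizes assumed). `tau` is a
    temperature hyperparameter, also an assumption of this sketch.
    """
    real = F.normalize(real_repr, dim=-1)
    fake = F.normalize(fake_repr, dim=-1)

    # (1) Pull: similarity of each generated sample to the real batch;
    # maximizing it draws fake and real representations together.
    pull = torch.exp(fake @ real.t() / tau).sum(dim=-1)      # (batch,)

    # (2) Push: pairwise similarity among distinct real samples;
    # minimizing it spreads real representations apart, competing
    # with the discriminator and keeping it from overtraining.
    sim_real = torch.exp(real @ real.t() / tau)              # (batch, batch)
    diag = torch.eye(real.size(0), dtype=torch.bool, device=real.device)
    push = sim_real.masked_fill(diag, 0.0).sum(dim=-1)       # (batch,)

    # InfoNCE-style ratio: the loss falls as pull grows and push shrinks.
    return -torch.log(pull / (pull + push)).mean()

# Hypothetical usage with random features standing in for encoder output.
real_h = torch.randn(16, 128)
fake_h = torch.randn(16, 128, requires_grad=True)
loss = counter_contrastive_loss(real_h, fake_h)
loss.backward()  # in a real GAN this gradient would update the generator
```

Note the push term acts only on real-real pairs: lowering their mutual similarity works against the discriminator's ability to carve out a tight "real" region, which is the counter-contrastive idea described in the abstract.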

Citation (APA)

Chai, Y., Zhang, H., Yin, Q., & Zhang, J. (2021). Counter-contrastive learning for language GANs. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 4834–4839). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-emnlp.415
