Disentangled representation learning for non-parallel text style transfer


Abstract

This paper tackles the problem of disentangling the latent representations of style and content in language models. We propose a simple yet effective approach that incorporates auxiliary multi-task and adversarial objectives for style prediction and bag-of-words prediction, respectively. We show, both qualitatively and quantitatively, that style and content are indeed disentangled in the latent space. This disentangled latent representation learning can be applied to style transfer on non-parallel corpora. We achieve high performance in terms of transfer accuracy, content preservation, and language fluency, in comparison with various previous approaches.
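The abstract describes two auxiliary objectives on a split latent representation: a multi-task loss that encourages the style half to predict the style label, and an adversarial loss that discourages the content half from carrying style information. The following is a minimal toy sketch of how these objectives combine; it is not the authors' code, and all names (the latent split, the classifier weights, the gradient-reversal-style sign flip) are illustrative assumptions.

```python
import numpy as np

# Toy sketch (not the paper's implementation): a sentence embedding is split
# into a "style" half and a "content" half. A multi-task loss trains a style
# classifier on the style half; an adversarial loss penalizes the encoder when
# a separate classifier can predict style from the content half.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, y):
    # Binary cross-entropy for a single example.
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

latent = rng.normal(size=8)                     # toy latent from an encoder
style_part, content_part = latent[:4], latent[4:]

w_style = rng.normal(size=4)                    # multi-task style classifier
w_adv = rng.normal(size=4)                      # adversarial style classifier
y_style = 1.0                                   # gold style label (binary)

# Multi-task objective: the style half SHOULD predict the style label.
mult_loss = bce(sigmoid(style_part @ w_style), y_style)

# Adversarial objective: the encoder is trained to MAXIMIZE the adversary's
# loss on the content half, hence the minus sign (gradient-reversal idea).
adv_loss = bce(sigmoid(content_part @ w_adv), y_style)

# Reconstruction / bag-of-words terms are omitted in this toy sketch.
total_encoder_loss = mult_loss - adv_loss
```

In the full model these terms would be weighted and added to a sequence autoencoder's reconstruction loss; the sketch only shows the opposing signs of the two auxiliary objectives.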

Citation (APA)

John, V., Mou, L., Bahuleyan, H., & Vechtomova, O. (2019). Disentangled representation learning for non-parallel text style transfer. In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (pp. 424–434). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p19-1041
