Cycle-Consistent Adversarial Autoencoders for Unsupervised Text Style Transfer

21 citations · 79 Mendeley readers

Abstract

Unsupervised text style transfer is challenging due to the lack of parallel data and the difficulty of content preservation. In this paper, we propose a novel neural approach to unsupervised text style transfer, which we refer to as Cycle-consistent Adversarial autoEncoders (CAE), trained from non-parallel data. CAE consists of three essential components: (1) LSTM autoencoders that encode a text in one style into its latent representation and decode an encoded representation into its original text or a transferred representation into a style-transferred text, (2) adversarial style transfer networks that use an adversarially trained generator to transform a latent representation in one style into a representation in another style, and (3) a cycle-consistent constraint that enhances the capacity of the adversarial style transfer networks in content preservation. The entire CAE with these three components can be trained end-to-end. Extensive experiments and in-depth analyses on two widely used public datasets consistently validate the effectiveness of the proposed CAE in both style transfer and content preservation against several strong baselines, in terms of four automatic evaluation metrics and human evaluation.
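To make component (3) concrete, the cycle-consistent constraint can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: toy linear maps `G_xy` and `G_yx` stand in for the adversarial transfer generators, and random vectors stand in for the LSTM-autoencoder latent representations; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy latent dimension (illustrative; not from the paper)

# Hypothetical linear stand-ins for the adversarial transfer generators:
# G_xy maps style-X latents to style Y, G_yx maps style-Y latents to style X.
G_xy = np.eye(d) + 0.1 * rng.normal(size=(d, d))
G_yx = np.linalg.inv(G_xy)  # exact inverse, so the cycle reconstructs perfectly

def cycle_consistency_loss(z_x, z_y):
    """L1 cycle loss: transferring X->Y->X (and Y->X->Y) should recover the input."""
    z_x_cycled = (z_x @ G_xy.T) @ G_yx.T  # X -> Y -> back to X
    z_y_cycled = (z_y @ G_yx.T) @ G_xy.T  # Y -> X -> back to Y
    return np.abs(z_x_cycled - z_x).mean() + np.abs(z_y_cycled - z_y).mean()

z_x = rng.normal(size=(4, d))  # batch of style-X latent codes
z_y = rng.normal(size=(4, d))  # batch of style-Y latent codes
loss = cycle_consistency_loss(z_x, z_y)
print(loss)  # near zero here, since G_yx exactly inverts G_xy
```

In the actual model the generators are neural networks trained adversarially, so this loss is minimized jointly with the autoencoder and adversarial objectives rather than being zero by construction; the point of the cycle term is to penalize transfers that lose the content needed to map back.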

APA Citation

Huang, Y., Zhu, W., Xiong, D., Zhang, Y., Hu, C., & Xu, F. (2020). Cycle-Consistent Adversarial Autoencoders for Unsupervised Text Style Transfer. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 2213–2223). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.201