Exploring Non-Autoregressive Text Style Transfer

7 citations · 53 Mendeley readers

Abstract

In this paper, we explore Non-AutoRegressive (NAR) decoding for unsupervised text style transfer. We first build a base NAR model by directly adapting the standard training scheme from its AutoRegressive (AR) counterpart. Although this NAR model decodes faster than the AR model, it sacrifices transfer performance because it lacks conditional dependence between output tokens. To address this, we investigate three techniques for performance enhancement: knowledge distillation, contrastive learning, and iterative decoding. Experimental results on two benchmark datasets suggest that, although the base NAR model is generally inferior to AR decoding, the performance gap can be clearly narrowed when NAR decoding is empowered with knowledge distillation, contrastive learning, and iterative decoding.
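The iterative decoding mentioned above refers to refining an initial one-shot parallel prediction over several passes. Below is a minimal, illustrative sketch of a Mask-Predict-style refinement loop of the kind commonly used for NAR generation; it is not the authors' implementation, and the dummy model, MASK_ID, VOCAB_SIZE, and the confidence-based re-masking schedule are assumptions made so the example runs standalone.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE = 100  # illustrative vocabulary size
MASK_ID = 0       # placeholder id for a masked target position

def dummy_style_transfer_model(src_ids, tgt_ids):
    """Stand-in for a trained NAR style-transfer decoder.

    Returns per-position log-probabilities over the vocabulary,
    conditioned on the source sentence and the partially decoded
    target. A real model would be a Transformer; here we return
    random scores so the sketch runs end to end.
    """
    logits = rng.normal(size=(len(tgt_ids), VOCAB_SIZE))
    return logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))

def mask_predict_decode(src_ids, tgt_len, n_iters=4):
    """Iterative NAR decoding: predict all target tokens in parallel,
    then repeatedly re-mask the least confident positions and
    re-predict them, conditioning on the rest."""
    tgt = np.full(tgt_len, MASK_ID)
    log_probs = dummy_style_transfer_model(src_ids, tgt)
    tgt = log_probs.argmax(axis=-1)
    conf = log_probs.max(axis=-1)
    for t in range(1, n_iters):
        # Linearly decay the number of re-masked tokens per pass.
        n_mask = int(tgt_len * (n_iters - t) / n_iters)
        if n_mask == 0:
            break
        remask = np.argsort(conf)[:n_mask]  # least confident positions
        tgt[remask] = MASK_ID
        log_probs = dummy_style_transfer_model(src_ids, tgt)
        tgt[remask] = log_probs[remask].argmax(axis=-1)
        conf[remask] = log_probs[remask].max(axis=-1)
    return tgt

print(mask_predict_decode(src_ids=np.array([5, 9, 23]), tgt_len=6))
```

Each pass re-predicts only the least confident tokens while conditioning on the rest, which restores some of the inter-token dependence that one-shot parallel decoding discards, at the cost of a few extra (still parallel) decoder passes.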

Cite

APA

Ma, Y., & Li, Q. (2021). Exploring Non-Autoregressive Text Style Transfer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021) (pp. 9267–9278). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.emnlp-main.730
