An Evaluation of Disentangled Representation Learning for Texts


Abstract

Learning disentangled representations of texts, in which information about different aspects of the text is encoded in separate representations, is an active area of NLP research aimed at controllable and interpretable text generation. These methods have, for the most part, been developed in the context of text style transfer, but their evaluation has been limited. In this work, we examine the motivation behind learning disentangled representations of content and style for texts, and their potential use-cases compared to end-to-end methods. We then propose evaluation metrics that correspond to these use-cases. We conduct a systematic investigation of previously proposed loss functions for such models, evaluating them on a highly structured synthetic natural-language dataset well suited to the task of disentangled representation learning, as well as on two other parallel style-transfer datasets. Our results demonstrate that current models still require considerable amounts of supervision in order to achieve good performance.
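To make the core idea concrete: in the content/style setting, an encoder maps a text to a latent vector partitioned into a content part and a style part, and style transfer amounts to recombining the content of one input with the style of another. The sketch below is a hedged toy illustration of that recombination step only, not the paper's actual model; the encoder, latent size, and split point are all stand-in assumptions.

```python
import numpy as np

DIM = 8  # hypothetical latent size: first half content, second half style


def encode(text: str) -> np.ndarray:
    """Toy stand-in for a learned encoder: a stable byte-sum seed -> vector.

    A real disentangled model would learn this mapping with reconstruction
    plus disentanglement losses; here it only produces a repeatable vector.
    """
    seed = sum(text.encode("utf-8"))
    return np.random.default_rng(seed).standard_normal(DIM)


def swap_style(z_content_src: np.ndarray, z_style_src: np.ndarray) -> np.ndarray:
    """Combine the content half of one latent with the style half of another."""
    h = DIM // 2
    return np.concatenate([z_content_src[:h], z_style_src[h:]])


# Transfer the style of sentence b onto the content of sentence a.
z_a = encode("the food was great")   # source of content
z_b = encode("terrible service!!")   # source of style
z_transfer = swap_style(z_a, z_b)
```

A decoder (omitted here) would then generate text from `z_transfer`; the evaluation question the paper raises is precisely whether the two halves really carry only content and only style.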

Citation (APA)

Vishnubhotla, K., Hirst, G., & Rudzicz, F. (2021). An Evaluation of Disentangled Representation Learning for Texts. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 1939–1951). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.170
