Evade the Trap of Mediocrity: Promoting Diversity and Novelty in Text Generation via Concentrating Attention

Abstract

Recently, powerful Transformer architectures have proven superior in generating high-quality sentences. Nevertheless, these models tend to produce dull high-frequency phrases, severely hurting the diversity and novelty of generated text. In this work, we dig into the intrinsic mechanism of this problem and find that sparser attention values in the Transformer could improve diversity. To understand this phenomenon, we first conduct both empirical and theoretical analyses and then attribute it to representation degeneration caused by the attentive mixture of the hidden states during training. We term this process the Trap of Mediocrity. To escape from this trap, we introduce a novel attention regularization loss to control the sharpness of the attention distribution, which is transparent to model structures and can be easily implemented within 20 lines of Python code. We prove that this method can be mathematically regarded as learning a Bayesian approximation of posterior attention. Experiments show that our method improves the diversity and novelty of the generated text while maintaining comparable quality on a variety of conditional and unconditional generation tasks.
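
The paper's exact regularization loss is not reproduced here; as a rough illustration of the idea, a sharpness-controlling regularizer can be implemented by penalizing the entropy of each attention distribution, so that flat (high-entropy) attention is discouraged. The sketch below is a minimal, assumption-laden approximation rather than the authors' code; attention_entropy_loss, lambda_attn, and the tensor shapes are all hypothetical.

```python
import torch

def attention_entropy_loss(attn_weights: torch.Tensor, eps: float = 1e-9) -> torch.Tensor:
    """Mean entropy of the attention distributions along the last dim.

    Penalizing this quantity pushes the model toward sharper, more
    concentrated attention, in the spirit of the loss described above.
    """
    entropy = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
    return entropy.mean()

# Illustrative usage (hypothetical shapes and regularization weight):
scores = torch.randn(2, 4, 8, 8)       # (batch, heads, queries, keys) logits
attn = scores.softmax(dim=-1)          # normalized attention distributions
lambda_attn = 0.1                      # assumed regularization strength
task_loss = torch.tensor(0.0)          # placeholder for e.g. cross-entropy
total_loss = task_loss + lambda_attn * attention_entropy_loss(attn)
```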

Cite (APA)

Li, W., Yi, X., Hu, J., Sun, M., & Xie, X. (2022). Evade the Trap of Mediocrity: Promoting Diversity and Novelty in Text Generation via Concentrating Attention. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 10834–10858). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.745
