Incorporating stylistic lexical preferences in generative language models

Citations: 2
Mendeley readers: 65

Abstract

While recent advances in language modeling have resulted in powerful generation models, their generation style remains implicitly dependent on the training data and cannot emulate a specific target style. Leveraging the generative capabilities of a transformer-based language model, we present an approach to induce certain target-author attributes by incorporating continuous multi-dimensional lexical preferences of an author into generative language models. We introduce rewarding strategies in a reinforcement learning framework that encourage the use of words across multiple categorical dimensions to varying extents. Our experiments demonstrate that the proposed approach can generate text that distinctively aligns with a given target author's lexical style. We conduct quantitative and qualitative comparisons with competitive and relevant baselines to illustrate the benefits of the proposed approach.
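As a rough, hypothetical sketch of the kind of reward signal such a framework might use (the abstract does not specify the authors' exact formulation), the function below scores generated text by how closely its usage of a few illustrative lexical categories matches a target author's continuous preference vector. The category names, the word2cat lookup, and the negative-L1 score are all assumptions made for illustration, not the paper's method.

```python
from collections import Counter

# Illustrative lexical categories (e.g., LIWC-style); the paper's actual
# category inventory is not given in this abstract.
CATEGORIES = ["affect", "cognition", "social", "perception"]

def category_profile(tokens, word2cat):
    """Fraction of tokens that fall into each lexical category."""
    counts = Counter(word2cat[t] for t in tokens if t in word2cat)
    total = max(len(tokens), 1)
    return [counts[c] / total for c in CATEGORIES]

def lexical_reward(tokens, target_profile, word2cat):
    """Higher (closer to 0) when generated text's category usage
    matches the target author's continuous preference vector."""
    profile = category_profile(tokens, word2cat)
    # Negative L1 distance between profiles; 0 is a perfect match.
    return -sum(abs(p - t) for p, t in zip(profile, target_profile))

# Usage with a toy lookup and a hypothetical target-author profile:
word2cat = {"happy": "affect", "think": "cognition", "friend": "social"}
target = [0.4, 0.3, 0.2, 0.1]
reward = lexical_reward("we think our friend is happy".split(), target, word2cat)
```

In a reinforcement learning setup, a scalar reward of this shape could be fed to a policy-gradient update of the language model, encouraging words from each category to the extent the target profile demands.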

Cite

CITATION STYLE

APA

Singh, H., Verma, G., & Srinivasan, B. V. (2020). Incorporating stylistic lexical preferences in generative language models. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 1074–1079). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.findings-emnlp.96
