Representation of Lexical Stylistic Features in Language Models' Embedding Space

Abstract

The representation space of pretrained Language Models (LMs) encodes rich information about words and their relationships (e.g., similarity, hypernymy, polysemy) as well as abstract semantic notions (e.g., intensity). In this paper, we demonstrate that lexical stylistic notions such as complexity, formality, and figurativeness can also be identified in this space. We show that it is possible to derive a vector representation for each of these stylistic notions from only a small number of seed pairs. Using these vectors, we can characterize new texts in terms of these dimensions by performing simple calculations in the corresponding embedding space. We conduct experiments on five datasets and find that static embeddings encode these features more accurately at the level of words and phrases, whereas contextualized LMs perform better on sentences. The lower performance of contextualized representations at the word level is partially attributable to the anisotropy of their vector space, which can be corrected to some extent using techniques like standardization.
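As a rough illustration of the approach the abstract describes, the sketch below derives a "formality" direction from a handful of seed pairs and scores new items by projecting their embeddings onto it. The `embed` stub, the seed pairs, and the mean-of-differences construction are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def embed(text, dim=300):
    # Stand-in for a real embedding lookup (e.g., GloVe vectors or pooled LM
    # states). Deterministic random vectors are used only so the sketch runs.
    seed = sum(ord(c) for c in text)
    return np.random.default_rng(seed).normal(size=dim)

# Hypothetical seed pairs contrasting informal vs. formal wording.
seed_pairs = [("kids", "children"), ("buy", "purchase"), ("ask", "request")]

# Style vector: average difference between the formal and informal member of
# each pair (one plausible reading of deriving a vector from seed pairs).
style_vec = np.mean([embed(f) - embed(i) for i, f in seed_pairs], axis=0)
style_vec /= np.linalg.norm(style_vec)

def formality_score(text):
    # Cosine of the text's embedding with the style direction; higher values
    # suggest more formal wording under this illustrative setup.
    v = embed(text)
    return float(np.dot(v / np.linalg.norm(v), style_vec))

print(formality_score("utilize"), formality_score("use"))
```

For contextualized representations, a per-dimension standardization of the embeddings (subtracting the mean and dividing by the standard deviation over a reference corpus) is one way to apply the anisotropy correction the abstract mentions before computing the projection.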

Cite

APA

Lyu, Q., Apidianaki, M., & Callison-Burch, C. (2023). Representation of Lexical Stylistic Features in Language Models' Embedding Space. In Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023) (pp. 370–387). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.starsem-1.32
