StyLEx: Explaining Style Using Human Lexical Annotations


Abstract

Large pre-trained language models have achieved impressive results on various style classification tasks, but they often learn spurious domain-specific words to make predictions (Hayati et al., 2021). While human explanations highlight stylistic tokens as important features for this task, we observe that model explanations often do not align with them. To tackle this issue, we introduce StyLEx, a model that learns from human-annotated explanations of stylistic features and jointly learns to perform the task and predict these features as model explanations. Our experiments show that StyLEx can provide human-like stylistic lexical explanations without sacrificing the performance of sentence-level style prediction on both in-domain and out-of-domain datasets. Explanations from StyLEx show significant improvements on explanation metrics (sufficiency, plausibility) when evaluated against human annotations, and human judges also find them more understandable than those of a widely used saliency-based explanation baseline.
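The joint learning setup described above can be sketched as a two-part objective: a sentence-level style classification loss plus a token-level loss that pushes the model's per-token importance scores toward the human-highlighted stylistic words. The following is a minimal illustrative sketch, not the paper's implementation; the loss functions, their weighting (`lam`), and all names here are assumptions for exposition.

```python
import math

def cross_entropy(style_probs, gold_label):
    # Sentence-level style classification loss (hypothetical form).
    return -math.log(style_probs[gold_label])

def token_bce(token_probs, human_highlights):
    # Token-level explanation loss: binary cross-entropy between the
    # model's per-token stylistic scores and the human lexical
    # annotations (1 = annotator marked the token as stylistic).
    eps = 1e-9
    losses = [
        -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))
        for p, y in zip(token_probs, human_highlights)
    ]
    return sum(losses) / len(losses)

def joint_loss(style_probs, gold_label, token_probs, human_highlights, lam=1.0):
    # Jointly optimize task prediction and explanation prediction;
    # `lam` (an assumed hyperparameter) balances the two terms.
    return cross_entropy(style_probs, gold_label) + lam * token_bce(
        token_probs, human_highlights
    )
```

For example, a sentence predicted as the correct style with probability 0.9, whose tokens score [0.8, 0.1] against human highlights [1, 0], yields a small combined loss; training would drive both terms toward zero together.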

Citation (APA)

Hayati, S. A., Park, K., Rajagopal, D., Ungar, L., & Kang, D. (2023). StyLEx: Explaining Style Using Human Lexical Annotations. In EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 2835–2848). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.eacl-main.208
