Improving the Generalizability of Text-Based Emotion Detection by Leveraging Transformers with Psycholinguistic Features

Abstract

In recent years, there has been increased interest in building predictive models that harness natural language processing and machine learning techniques to detect emotions from various text sources, including social media posts, micro-blogs, and news articles. Yet, deployment of such models in real-world sentiment and emotion applications faces challenges, in particular poor out-of-domain generalizability. This is likely due to domain-specific differences (e.g., topics, communicative goals, and annotation schemes) that make transfer between domains difficult for emotion recognition models. In this work we propose approaches for text-based emotion detection that leverage transformer models (BERT and RoBERTa) in combination with Bidirectional Long Short-Term Memory (BiLSTM) networks trained on a comprehensive set of psycholinguistic features. First, we evaluate the performance of our models within-domain on two benchmark datasets: GoEmotions (Demszky et al., 2020) and ISEAR (Scherer and Wallbott, 1994). Second, we conduct transfer learning experiments on six datasets from the Unified Emotion Dataset (Bostan and Klinger, 2018) to evaluate their out-of-domain robustness. We find that the proposed hybrid models improve the ability to generalize to out-of-distribution data compared to a standard transformer-based approach. Moreover, we observe that these models perform competitively on in-domain data.
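The hybrid architecture described in the abstract can be illustrated with a minimal sketch. The fusion below is an assumption based on the abstract alone: a transformer sentence embedding is concatenated with a BiLSTM summary of psycholinguistic feature vectors before classification. All names, dimensions, and the concatenation-based fusion are hypothetical; the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn


class HybridEmotionClassifier(nn.Module):
    """Hypothetical sketch: fuse a transformer sentence embedding with a
    BiLSTM run over a sequence of psycholinguistic feature vectors."""

    def __init__(self, transformer_dim=768, psyling_dim=32,
                 lstm_hidden=64, num_emotions=7):
        super().__init__()
        self.bilstm = nn.LSTM(psyling_dim, lstm_hidden,
                              batch_first=True, bidirectional=True)
        # Classifier sees transformer embedding + both LSTM directions.
        self.classifier = nn.Linear(transformer_dim + 2 * lstm_hidden,
                                    num_emotions)

    def forward(self, cls_embedding, psyling_seq):
        # cls_embedding: (batch, transformer_dim), e.g. a BERT [CLS] vector
        # psyling_seq: (batch, seq_len, psyling_dim) per-segment features
        _, (h_n, _) = self.bilstm(psyling_seq)
        # Concatenate final forward and backward hidden states.
        lstm_repr = torch.cat([h_n[0], h_n[1]], dim=-1)
        fused = torch.cat([cls_embedding, lstm_repr], dim=-1)
        return self.classifier(fused)


model = HybridEmotionClassifier()
# Random stand-ins for a BERT embedding and 5 psycholinguistic vectors.
logits = model(torch.randn(2, 768), torch.randn(2, 5, 32))
print(logits.shape)  # torch.Size([2, 7])
```

Late fusion of this kind keeps the two representation streams independent until the final layer, which is one plausible way a transformer and a feature-based BiLSTM could be combined.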

Citation (APA)

Zanwar, S., Wiechmann, D., Qiao, Y., & Kerz, E. (2022). Improving the generalizability of text-based emotion detection by leveraging transformers with psycholinguistic features. In Proceedings of the 5th Workshop on Natural Language Processing and Computational Social Science (NLP+CSS), held at EMNLP 2022 (pp. 1–13). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.nlpcss-1.1
