Autoregressive Affective Language Forecasting: A Self-Supervised Task


Abstract

Human natural language is produced at specific points in time, while human emotions change over time. While much work has established a strong link between language use and emotional states, few have attempted to model emotional language in time. Here, we introduce the task of affective language forecasting – predicting future change in language based on past changes of language – a task with real-world applications such as mental health treatment or forecasting trends in consumer confidence. We establish some of the fundamental autoregressive characteristics of the task (necessary history size, static versus dynamic length, and varying time-step resolutions) and then build on popular sequence models for words to instead model sequences of language-based emotion in time. Over a novel Twitter dataset of 1,900 users with weekly and daily scores for 6 emotions and 2 additional linguistic attributes, we find that a novel dual-sequence GRU model with decayed hidden states achieves the best results (r = .66). We make our anonymized dataset, task setup, and evaluation code available for others to build on.
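The abstract only names the best-performing model; as a rough illustration of what a dual-sequence GRU with decayed hidden states could look like, the sketch below runs one GRU over a weekly emotion-score sequence and one over a daily sequence, multiplying each hidden state by a decay factor between steps and fusing the two summaries to forecast the next step's scores. All dimensions, the exponential decay form, and the fusion layer are assumptions for illustration, not the authors' implementation.

```python
# Minimal PyTorch sketch (assumed architecture, not the paper's exact model):
# two GRUs over weekly and daily emotion sequences with decayed hidden states.
import torch
import torch.nn as nn


class DecayedGRUForecaster(nn.Module):
    def __init__(self, input_dim: int = 8, hidden_dim: int = 32, decay: float = 0.9):
        super().__init__()
        self.weekly_gru = nn.GRUCell(input_dim, hidden_dim)
        self.daily_gru = nn.GRUCell(input_dim, hidden_dim)
        self.decay = decay  # assumed exponential decay applied to hidden states
        self.out = nn.Linear(2 * hidden_dim, input_dim)

    def _run(self, cell: nn.GRUCell, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, time, input_dim); decay the hidden state before each update.
        h = torch.zeros(seq.size(0), cell.hidden_size, device=seq.device)
        for t in range(seq.size(1)):
            h = cell(seq[:, t, :], self.decay * h)
        return h

    def forward(self, weekly: torch.Tensor, daily: torch.Tensor) -> torch.Tensor:
        # Concatenate the two sequence summaries and forecast the next step's
        # scores (e.g., 6 emotions + 2 linguistic attributes = 8 dimensions).
        h = torch.cat([self._run(self.weekly_gru, weekly),
                       self._run(self.daily_gru, daily)], dim=-1)
        return self.out(h)


if __name__ == "__main__":
    model = DecayedGRUForecaster()
    weekly = torch.randn(4, 10, 8)   # toy data: 10 past weeks for 4 users
    daily = torch.randn(4, 30, 8)    # toy data: 30 past days for the same users
    print(model(weekly, daily).shape)  # -> torch.Size([4, 8])
```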

Cite

APA

Matero, M., & Schwartz, H. A. (2020). Autoregressive Affective Language Forecasting: A Self-Supervised Task. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 2913–2923). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.261
