Encoding Sentences with a Syntax-Aware Self-attention Neural Network for Emotion Distribution Prediction


Abstract

Emotion distribution prediction aims to simultaneously identify multiple emotions and their intensities in a sentence. Recently, neural network models have been successfully applied to this task. However, most of them have not fully considered the syntactic information of the sentence. In this paper, we propose a syntax-aware self-attention neural network (SynSAN) that exploits syntactic features for emotion distribution prediction. In particular, we first explore a syntax-level self-attention layer over the syntactic tree to learn a syntax-aware vector for each word by incorporating dependency syntactic information from its parent and child nodes. Then we construct a sentence-level self-attention layer that compresses the syntax-aware word vectors into the sentence representation used for emotion prediction. Experimental results on two public datasets show that our model outperforms state-of-the-art models by large margins while requiring fewer training parameters.
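The two-layer architecture described in the abstract can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the paper's implementation: the function names, the single-head dot-product attention, and the dependency-mask construction are assumptions. The key idea it demonstrates is restricting each word's attention to its parent and children in the dependency tree, then pooling the resulting syntax-aware vectors into one sentence representation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def syntax_self_attention(H, heads):
    """Syntax-level self-attention (sketch): each word attends only to
    itself, its parent, and its children in the dependency tree.
    H: (n, d) word vectors; heads[i]: index of word i's head (-1 = root)."""
    n, d = H.shape
    mask = np.eye(n, dtype=bool)              # every word attends to itself
    for child, parent in enumerate(heads):
        if parent >= 0:
            mask[child, parent] = True        # attend to parent node
            mask[parent, child] = True        # attend to child node
    scores = H @ H.T / np.sqrt(d)             # scaled dot-product scores
    scores = np.where(mask, scores, -1e9)     # block non-adjacent words
    return softmax(scores, axis=-1) @ H       # syntax-aware word vectors

def sentence_self_attention(S, w):
    """Sentence-level self-attention (sketch): a learned query vector w
    weights the syntax-aware word vectors into one sentence vector."""
    alpha = softmax(S @ w)                    # attention weights over words
    return alpha @ S                          # (d,) sentence representation

# Toy usage: 5 words, dimension 8; word 1 is the root of the parse tree.
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 8))
heads = [1, -1, 1, 1, 3]                      # dependency head of each word
S = syntax_self_attention(H, heads)
sentence_vec = sentence_self_attention(S, rng.standard_normal(8))
print(S.shape, sentence_vec.shape)            # (5, 8) (8,)
```

In the full model this sentence representation would feed a final layer that predicts the emotion distribution; here the sketch stops at the representation itself.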

Citation (APA)

Wang, C., & Wang, B. (2020). Encoding Sentences with a Syntax-Aware Self-attention Neural Network for Emotion Distribution Prediction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12431 LNAI, pp. 256–266). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-60457-8_21
