In natural language, the intended meaning of a word or phrase is often implicit and depends on context. In this work, we propose a simple yet effective method for sentiment analysis using contextual embeddings and a self-attention mechanism. Experimental results for three languages, including morphologically rich Polish and German, show that our model matches or outperforms state-of-the-art models. In all cases, models leveraging contextual embeddings prove superior. Finally, this work is intended as a step towards a universal, multilingual sentiment classifier.
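To make the core idea concrete, the following is a minimal NumPy sketch of the general technique the abstract names: scaled dot-product self-attention applied over contextual token embeddings, pooled into a sentence representation and projected to sentiment class probabilities. This is an illustrative sketch only, not the authors' implementation; all weight matrices, dimensions, and the three-class label set (negative/neutral/positive) are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product self-attention over a sequence of
    # contextual token embeddings X with shape (n_tokens, d_model).
    d_k = Wq.shape[1]
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = softmax(Q @ K.T / np.sqrt(d_k))   # (n_tokens, n_tokens)
    return scores @ V                          # (n_tokens, d_k)

def classify(X, params):
    # Attend over the tokens, mean-pool the attended representations,
    # and project to sentiment class probabilities (hypothetical head).
    Wq, Wk, Wv, Wc = params
    attended = self_attention(X, Wq, Wk, Wv)
    pooled = attended.mean(axis=0)
    return softmax(pooled @ Wc)                # (n_classes,)

# Toy example with random weights; in practice X would come from a
# pretrained contextual encoder and the weights would be trained.
rng = np.random.default_rng(0)
d_model, d_k, n_classes = 16, 8, 3             # 3 classes: neg/neu/pos
params = tuple(rng.standard_normal(s) * 0.1
               for s in [(d_model, d_k), (d_model, d_k),
                         (d_model, d_k), (d_k, n_classes)])
X = rng.standard_normal((5, d_model))          # 5 tokens of embeddings
probs = classify(X, params)
```

The softmax over `Q @ K.T` lets every token weight every other token when building its representation, which is what allows context to disambiguate sentiment-bearing words.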
CITATION STYLE
Biesialska, K., Biesialska, M., & Rybinski, H. (2020). Sentiment Analysis with Contextual Embeddings and Self-attention. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12117 LNAI, pp. 32–41). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-59491-6_4