Sarcasm Detection Using Multi-Head Attention Based Bidirectional LSTM


Abstract

Sarcasm is often used on social media to express a negative opinion through positive or intensified positive words. This intentional ambiguity makes sarcasm detection a challenging yet important task in sentiment analysis. Sarcasm detection is typically framed as a binary classification problem, and both feature-rich traditional models and deep learning models have been successfully built to predict sarcastic comments. Previous work has built models using lexical, semantic, and pragmatic features. We extract the most significant of these features and build a feature-rich SVM that outperforms those models. In this paper, we introduce a multi-head attention-based bidirectional long short-term memory (MHA-BiLSTM) network to detect sarcastic comments in a given corpus. The experimental results reveal that the multi-head attention mechanism enhances the performance of the BiLSTM, which in turn performs better than the feature-rich SVM models.
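The core idea is to let multiple attention heads weight the BiLSTM's hidden states, so that different heads can focus on different sarcasm cues (e.g. an intensified positive phrase next to a negative situation). A minimal numpy sketch of multi-head self-attention over a sequence of hidden states is below; this is an illustration of the general mechanism under assumed dimensions, not the authors' exact parameterization (the paper's version may use learned projection matrices rather than simple per-head slices):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(H, num_heads):
    """Self-attention over BiLSTM hidden states H of shape (T, d).

    Splits the feature dimension into `num_heads` heads, applies scaled
    dot-product attention within each head, and concatenates the results.
    Illustrative sketch only: queries, keys, and values are taken directly
    from per-head slices of H instead of learned projections.
    """
    T, d = H.shape
    assert d % num_heads == 0, "feature dim must divide evenly across heads"
    dk = d // num_heads
    heads = []
    for i in range(num_heads):
        Q = K = V = H[:, i * dk:(i + 1) * dk]    # per-head slice, (T, dk)
        scores = softmax(Q @ K.T / np.sqrt(dk))  # (T, T) attention weights
        heads.append(scores @ V)                 # weighted sum of values
    return np.concatenate(heads, axis=1)         # back to (T, d)
```

For classification, the attended states would then typically be pooled into a single vector and passed through a sigmoid-activated dense layer to produce the binary sarcastic/non-sarcastic prediction.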

Citation (APA)

Kumar, A., Narapareddy, V. T., Srikanth, V. A., Malapati, A., & Neti, L. B. M. (2020). Sarcasm Detection Using Multi-Head Attention Based Bidirectional LSTM. IEEE Access, 8, 6388–6397. https://doi.org/10.1109/ACCESS.2019.2963630
