A Comparative Analysis of Local Explainability of Models for Sentiment Detection

Abstract

Sentiment analysis is a core task in Natural Language Processing (NLP) that classifies natural language sentences as expressing positive or negative sentiment. For many deep learning-based models, explaining a predicted sentiment can be as important as the prediction itself. In this study, we apply four different classification models to sentiment analysis of Internet Movie Database (IMDB) reviews and investigate the explainability of their results using Local Interpretable Model-agnostic Explanations (LIME). Our results reveal how attention-based models, such as the Bidirectional LSTM (BiLSTM) and fine-tuned Bidirectional Encoder Representations from Transformers (BERT), focus on the most relevant keywords.
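As a minimal sketch of the LIME workflow the abstract describes (an illustration only: the paper's four models are not reproduced here, so a TF-IDF plus logistic regression stand-in and two hypothetical toy reviews are assumed in place of the IMDB data), LIME's LimeTextExplainer can attribute a single sentiment prediction to individual words:

    # Hedged sketch: a TF-IDF + logistic regression stand-in for the paper's
    # models, trained on two toy reviews (hypothetical data, not the IMDB set).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from lime.lime_text import LimeTextExplainer

    train_texts = ["a wonderful, moving film", "dull plot and wooden acting"]
    train_labels = [1, 0]  # 1 = positive, 0 = negative

    # Any black-box model exposing predict_proba over raw strings works here.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(train_texts, train_labels)

    explainer = LimeTextExplainer(class_names=["negative", "positive"])
    review = "a wonderful film despite the dull middle act"

    # LIME masks words in the review, queries the model on each perturbed
    # copy, and fits a local linear surrogate whose weights rank the words.
    exp = explainer.explain_instance(review, model.predict_proba, num_features=5)
    print(exp.as_list())  # [(word, weight), ...] per-review keyword attributions

Because LIME only queries the model through its prediction function, the same procedure applies unchanged to the BiLSTM and BERT classifiers studied in the paper; the per-word weights it returns are the local keyword evidence being compared across models.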

Citation (APA)

Hajiyan, H., Davoudi, H., & Ebrahimi, M. (2023). A comparative analysis of local explainability of models for sentiment detection. In Lecture Notes in Networks and Systems (Vol. 561, pp. 593–606). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-18344-7_42
