Sentiment analysis is one of the most sought-after research problems in Natural Language Processing (NLP), and the range of problems it addresses keeps growing. To date, most research has focused on predicting sentiment, or sentiment categories such as sarcasm, humor, offense and motivation, on text data. However, very little research focuses on predicting or analyzing the sentiment of internet memes. We address this problem as part of "Task 8 of SemEval 2020: Memotion Analysis" (Sharma et al., 2020), in which we participated in all three subtasks. Our system, built using state-of-the-art pre-trained Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018), outperformed the baseline models for Tasks A and C and performed close to the baseline for Task B. In this paper, we present the data used for training, the data cleaning and preparation steps, the fine-tuning process of the BERT-based model, and the final prediction of sentiment and sentiment categories. We found that sequence models such as Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and its variants performed below par in predicting the sentiments. We also performed a comparative analysis with other Transformer-based models, namely DistilBERT (Sanh et al., 2019) and XLNet (Yang et al., 2019).
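To make the setup concrete, the sketch below shows the general shape of a BERT-style sentiment classifier for meme text: a Transformer encoder followed by a classification head over the first ("[CLS]"-style) token. This is a minimal illustration under our own assumptions (tiny randomly initialized encoder, three sentiment classes, toy token ids), not the authors' actual model or hyperparameters; in practice one would load pre-trained BERT weights and fine-tune on the Memotion data.

```python
import torch
import torch.nn as nn

class MemeSentimentClassifier(nn.Module):
    """Illustrative BERT-style classifier: encoder + linear head.

    A tiny randomly initialized stand-in for a fine-tuned BERT model;
    dimensions and class count are assumptions, not the paper's setup.
    """
    def __init__(self, vocab_size=30522, d_model=64, n_heads=2,
                 n_layers=2, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Head over 3 assumed classes: negative / neutral / positive
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, ids):
        h = self.encoder(self.embed(ids))   # (batch, seq, d_model)
        return self.head(h[:, 0])           # pool first token, like [CLS]

model = MemeSentimentClassifier()
ids = torch.randint(0, 30522, (2, 16))      # two toy token-id sequences
labels = torch.tensor([2, 0])               # toy sentiment labels
logits = model(ids)                         # (2, 3) class scores
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()                             # one fine-tuning gradient step
```

Fine-tuning then amounts to repeating the forward pass, loss, and optimizer step over labeled meme captions, updating all encoder weights rather than just the head.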
Avvaru, A., & Vobilisetty, S. (2020). BERT at SemEval-2020 Task 8: Using BERT to analyse meme emotions. In 14th International Workshops on Semantic Evaluation, SemEval 2020 - co-located 28th International Conference on Computational Linguistics, COLING 2020, Proceedings (pp. 1094–1099). International Committee for Computational Linguistics. https://doi.org/10.18653/v1/2020.semeval-1.144