LT3 at SemEval-2020 Task 8: Multi-Modal Multi-Task Learning for Memotion Analysis

Abstract

Internet memes have become a very popular mode of expression on social media networks today. Their multi-modal nature, arising from the combination of text and image, makes them a very challenging object for automatic analysis. In this paper, we describe our contribution to the SemEval-2020 Memotion Analysis task. We propose a multi-modal multi-task learning system, which incorporates "memebeddings", i.e. joint text and vision features, to learn and optimize for all three Memotion subtasks simultaneously. The experimental results show that the proposed system consistently outperforms the competition's baseline, and that the system setup with continual learning (where tasks are trained sequentially) obtains the best classification F1-scores.
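As a purely illustrative sketch (not the authors' implementation), the "memebeddings" idea can be pictured as concatenated text and image feature vectors passed through a shared layer with one output head per Memotion subtask. All dimensions, subtask names, and class counts below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions (assumptions, not taken from the paper).
TEXT_DIM, IMG_DIM, HIDDEN = 768, 512, 256
# Illustrative subtask heads; actual Memotion label sets differ per subtask.
N_CLASSES = {"sentiment": 3, "humour": 4, "scale": 4}

def memebedding(text_feat, img_feat):
    """Joint text+vision feature; here simple concatenation (an assumption)."""
    return np.concatenate([text_feat, img_feat], axis=-1)

# Shared projection plus one linear head per subtask (multi-task setup).
W_shared = rng.normal(0.0, 0.02, (TEXT_DIM + IMG_DIM, HIDDEN))
heads = {task: rng.normal(0.0, 0.02, (HIDDEN, c)) for task, c in N_CLASSES.items()}

def forward(text_feat, img_feat):
    # Shared representation feeds every subtask head simultaneously.
    h = np.tanh(memebedding(text_feat, img_feat) @ W_shared)
    return {task: h @ W for task, W in heads.items()}

logits = forward(rng.normal(size=TEXT_DIM), rng.normal(size=IMG_DIM))
```

In a continual-learning variant, such heads would instead be trained one subtask after another rather than jointly.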

Citation (APA)

Singh, P., Bauwelinck, N., & Lefever, E. (2020). LT3 at SemEval-2020 Task 8: Multi-Modal Multi-Task Learning for Memotion Analysis. In 14th International Workshops on Semantic Evaluation, SemEval 2020 - co-located 28th International Conference on Computational Linguistics, COLING 2020, Proceedings (pp. 1155–1162). International Committee for Computational Linguistics. https://doi.org/10.18653/v1/2020.semeval-1.153
