AIMH at SemEval-2021 Task 6: Multimodal Classification Using an Ensemble of Transformer Models


Abstract

This paper describes the system used by the AIMH Team for SemEval-2021 Task 6. We propose a transformer-based architecture for processing multimodal content (text and images) in memes. Our architecture, called DVTT (Double Visual Textual Transformer), treats Subtasks 1 and 3 of Task 6 as multi-label classification problems: the text and/or image of a meme is processed, and the probability of the presence of each possible persuasion technique is returned. DVTT uses two complete transformer networks operating on text and images, with the two modalities mutually conditioned: in each network, one modality acts as the primary input and the other enriches it, yielding two distinct modes of operation. The outputs of the two transformers are merged by averaging the inferred probabilities for each possible label, and the overall network is trained end-to-end with a binary cross-entropy loss.
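The fusion and training objective described above can be sketched in a few lines. This is a minimal, dependency-free illustration (not the authors' code): it assumes each branch produces one raw logit per persuasion technique, applies a sigmoid to get per-label probabilities, averages the two branches, and computes the multi-label binary cross-entropy loss. All function names here are illustrative.

```python
import math

def sigmoid(x):
    """Map a raw logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def merge_branch_probs(text_logits, image_logits):
    """Average per-label probabilities from the text and image branches,
    as in the late-fusion step described in the abstract."""
    p_text = [sigmoid(z) for z in text_logits]
    p_image = [sigmoid(z) for z in image_logits]
    return [(a + b) / 2.0 for a, b in zip(p_text, p_image)]

def bce_loss(probs, targets, eps=1e-7):
    """Multi-label binary cross-entropy, averaged over labels.
    `targets` holds 0/1 indicators, one per persuasion technique."""
    total = 0.0
    for p, y in zip(probs, targets):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(probs)

# Example: two labels, both branches undecided (logit 0 -> prob 0.5)
probs = merge_branch_probs([0.0, 0.0], [0.0, 0.0])
loss = bce_loss(probs, [1, 0])
```

With both branches at logit 0, each merged probability is 0.5 and the loss reduces to ln 2; in the real system the gradient of this loss flows back through both transformer branches, since the model is trained end-to-end.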

Citation (APA)

Messina, N., Falchi, F., Gennaro, C., & Amato, G. (2021). AIMH at SemEval-2021 Task 6: Multimodal Classification Using an Ensemble of Transformer Models. In SemEval 2021 - 15th International Workshop on Semantic Evaluation, Proceedings of the Workshop (pp. 1020–1026). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.semeval-1.140
