Multimodal emotion recognition from art using sequential co-attention

22 citations · 25 Mendeley readers

Abstract

In this study, we present a multimodal emotion recognition architecture that uses both feature-level attention (sequential co-attention) and modality attention (weighted modality fusion) to classify emotion in art. The proposed architecture helps the model focus on learning informative, refined representations for both feature extraction and modality fusion. The resulting system can be used to categorize artworks according to the emotions they evoke, recommend paintings that accentuate or balance a particular mood, and search for paintings of a particular style or genre that represent custom content in a custom state of affect. Experimental results on the WikiArt emotion dataset showed the effectiveness of the proposed approach and the usefulness of the three modalities in emotion recognition.
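
To make the two attention stages concrete, the sketch below wires them together in PyTorch: a feature-level attention block that summarizes one modality under guidance from the others, applied sequentially across three modalities, followed by a weighted modality fusion that learns softmax weights over the three summaries. This is a minimal illustrative sketch, not the authors' implementation; the module names, the choice of image, title, and description features as the three modality inputs, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttendBlock(nn.Module):
    """Soft attention over a sequence of features, conditioned on a guide vector."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)
        self.score = nn.Linear(dim, 1)

    def forward(self, feats, guide):
        # feats: (B, N, D) per-position modality features; guide: (B, D) context
        g = guide.unsqueeze(1).expand_as(feats)
        e = self.score(torch.tanh(self.proj(torch.cat([feats, g], dim=-1))))  # (B, N, 1)
        alpha = F.softmax(e, dim=1)            # attention weights over the N positions
        return (alpha * feats).sum(dim=1)      # (B, D) attended summary

class SequentialCoAttentionFusion(nn.Module):
    """Attend to each modality in turn, guided by the previous summary,
    then fuse the three summaries with learned modality weights."""
    def __init__(self, dim, num_classes):
        super().__init__()
        self.attend = nn.ModuleList(AttendBlock(dim) for _ in range(3))
        self.modality_logits = nn.Linear(dim, 1)   # scores each modality summary
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, image_feats, title_feats, desc_feats):
        feats = [image_feats, title_feats, desc_feats]        # each (B, N_m, D)
        # Initial guide: mean of the mean-pooled features of all modalities.
        guide = torch.stack([f.mean(dim=1) for f in feats]).mean(dim=0)
        summaries = []
        for m, f in enumerate(feats):                         # sequential co-attention
            s = self.attend[m](f, guide)
            summaries.append(s)
            guide = s                          # next modality is guided by this summary
        s = torch.stack(summaries, dim=1)                     # (B, 3, D)
        w = F.softmax(self.modality_logits(s), dim=1)         # (B, 3, 1) modality weights
        fused = (w * s).sum(dim=1)                            # weighted modality fusion
        return self.classifier(fused)                         # emotion-class logits

# Example: batch of 4 artworks, 512-d features per modality position
# (region/token counts and the 3 emotion classes are illustrative).
model = SequentialCoAttentionFusion(dim=512, num_classes=3)
logits = model(torch.randn(4, 49, 512),   # image region features
               torch.randn(4, 12, 512),   # title token features
               torch.randn(4, 60, 512))   # description token features
```

Passing each modality's summary as the guide for the next lets earlier modalities sharpen attention over later ones, while the softmax over modality scores lets the fusion down-weight a modality that carries little emotional signal for a given artwork.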

Citation (APA)

Tashu, T. M., Hajiyeva, S., & Horvath, T. (2021). Multimodal emotion recognition from art using sequential co-attention. Journal of Imaging, 7(8), 157. https://doi.org/10.3390/jimaging7080157
