Deep‐learning‐based multimodal emotion classification for music videos

57 Citations
73 Readers (Mendeley users who have this article in their library)

Abstract

Music videos contain a great deal of visual and acoustic information. Each information source within a music video influences the emotions conveyed through the audio and video, suggesting that only a multimodal approach is capable of achieving efficient affective computing. This paper presents an affective computing system that relies on music, video, and facial expression cues, making it useful for emotional analysis. We applied audio–video information exchange and boosting methods to regularize the training process, and we reduced the computational cost by using a separable convolution strategy. In sum, our empirical findings are as follows: (1) multimodal representations efficiently capture all acoustic and visual emotional cues included in each music video, (2) the computational cost of each neural network is significantly reduced by factorizing the standard 2D/3D convolution into separate channel and spatiotemporal interactions, and (3) information-sharing methods incorporated into multimodal representations help guide individual information flow and boost overall performance. We tested our findings across several unimodal and multimodal networks against various evaluation metrics and visual analyzers. Our best classifier attained 74% accuracy, an f1-score of 0.73, and an area-under-the-curve score of 0.926.
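The paper itself does not include code, but the factorization described in finding (2) maps onto a well-known pattern. Below is a minimal PyTorch sketch (class and parameter names are ours, not the authors') of one common way to split a standard 3D convolution into a per-channel spatiotemporal (depthwise) convolution followed by a 1×1×1 cross-channel (pointwise) convolution; treat it as an illustration of the general technique under those assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SeparableConv3d(nn.Module):
    """Depthwise-separable 3D convolution: a per-channel (depthwise)
    spatiotemporal convolution followed by a 1x1x1 (pointwise) convolution
    that mixes channels. Factoring the two interactions cuts the weight
    count from in_ch*out_ch*k^3 to in_ch*k^3 + in_ch*out_ch."""

    def __init__(self, in_ch, out_ch, kernel=3, stride=1):
        super().__init__()
        pad = kernel // 2
        # spatiotemporal interaction: one k x k x k filter per input channel
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel, stride=stride,
                                   padding=pad, groups=in_ch, bias=False)
        # cross-channel interaction: 1x1x1 mixing of the depthwise outputs
        self.pointwise = nn.Conv3d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (batch, channels, time, height, width)
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# quick shape check on a dummy 16-frame video clip
clip = torch.randn(2, 3, 16, 112, 112)
print(SeparableConv3d(3, 64)(clip).shape)  # torch.Size([2, 64, 16, 112, 112])
```

With a 3×3×3 kernel this uses in_ch·27 + in_ch·out_ch weights instead of in_ch·out_ch·27, roughly a 27× reduction when out_ch is large, which is the kind of saving the abstract attributes to its separable convolution strategy.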




Cite

CITATION STYLE: APA

Pandeya, Y. R., Bhattarai, B., & Lee, J. (2021). Deep‐learning‐based multimodal emotion classification for music videos. Sensors, 21(14), Article 4927. https://doi.org/10.3390/s21144927

Readers over time: yearly Mendeley reader counts, 2021–2025 (chart).

Readers' Seniority

PhD / Post grad / Masters / Doc: 10 (77%)
Professor / Associate Prof.: 1 (8%)
Lecturer / Post doc: 1 (8%)
Researcher: 1 (8%)

Readers' Discipline

Computer Science: 5 (33%)
Engineering: 5 (33%)
Arts and Humanities: 3 (20%)
Neuroscience: 2 (13%)
