Music videos contain a great deal of visual and acoustic information. Each information source within a music video influences the emotions conveyed through the audio and video, suggesting that only a multimodal approach can achieve effective affective computing. This paper presents an affective computing system that relies on music, video, and facial expression cues, making it useful for emotional analysis. We applied audio–video information exchange and boosting methods to regularize the training process, and we reduced computational costs by using a separable convolution strategy. In sum, our empirical findings are as follows: (1) multimodal representations efficiently capture the acoustic and visual emotional cues in each music video; (2) the computational cost of each neural network is significantly reduced by factorizing the standard 2D/3D convolution into separate channel and spatiotemporal interactions; and (3) information-sharing methods incorporated into multimodal representations help guide individual information flows and boost overall performance. We tested our findings across several unimodal and multimodal networks against various evaluation metrics and visual analyzers. Our best classifier attained 74% accuracy, an F1-score of 0.73, and an area under the curve (AUC) score of 0.926.
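To make the cost-reduction claim in finding (2) concrete, the sketch below compares the parameter count of a standard 3D convolution with a (2+1)D-style factorization that splits it into a spatial 2D convolution followed by a temporal 1D convolution. The layer sizes (64 input channels, 128 output channels, a 3×3×3 kernel) are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative parameter-count comparison; layer shapes are assumptions,
# not the configuration used in the paper.

def conv3d_params(c_in, c_out, t, k):
    """Parameters of a standard 3D conv with a (t, k, k) kernel, no bias."""
    return c_in * c_out * t * k * k

def conv2plus1d_params(c_in, c_out, t, k, c_mid=None):
    """(2+1)D factorization: a 1 x k x k spatial conv into c_mid channels,
    then a t x 1 x 1 temporal conv into c_out channels, no bias."""
    if c_mid is None:
        c_mid = c_out  # simple choice for the intermediate width
    return c_in * c_mid * k * k + c_mid * c_out * t

standard = conv3d_params(64, 128, t=3, k=3)      # 64*128*3*3*3 = 221184
factored = conv2plus1d_params(64, 128, t=3, k=3)  # 64*128*9 + 128*128*3 = 122880
print(standard, factored, factored / standard)
```

Under these assumed shapes, the factorized layer uses roughly 44% fewer parameters than the standard 3D convolution, which is the kind of saving the separable strategy trades on.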
Pandeya, Y. R., Bhattarai, B., & Lee, J. (2021). Deep‐learning‐based multimodal emotion classification for music videos. Sensors, 21(14). https://doi.org/10.3390/s21144927