Automatic extraction of affective metadata from videos through emotion recognition algorithms

Abstract

In recent years, the spread of social networks has made available large amounts of user-generated data containing people's opinions and feelings. Such data are mostly unstructured and therefore need to be enriched with metadata to allow efficient indexing and querying. In this work we focus on videos and extend traditional metadata extraction techniques with emotional metadata, enabling data analysis from an affective perspective. To this end, we present a three-phase methodology for the automatic extraction of emotional metadata from videos through facial expression recognition algorithms. We also propose a simple but versatile metadata model that captures variations in emotions across video chunks. Experiments on a real-world video dataset show that our non-linear classifier reaches 72% classification accuracy in facial expression recognition.
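
The three phases described above map naturally onto a frame-sampling, face-detection, and per-chunk aggregation pipeline. The sketch below illustrates one possible realization in Python, assuming OpenCV for frame sampling and Haar-cascade face detection; the expression classifier (classify_expression), the emotion label set, the chunk length, and the metadata record fields are illustrative assumptions, not the paper's actual implementation.

# A minimal sketch of the three-phase pipeline, assuming OpenCV.
# classify_expression is a hypothetical stand-in for the paper's
# non-linear facial expression classifier.
from collections import Counter
import cv2

# Assumed label set; the paper's classes may differ.
EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutral"]

# Haar cascade shipped with OpenCV; a production system might
# prefer a CNN-based face detector.
FACE_DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_expression(face_img):
    """Hypothetical placeholder: returns one label from EMOTIONS."""
    raise NotImplementedError("plug in a trained expression model here")

def extract_emotional_metadata(video_path, chunk_seconds=5,
                               frames_per_second=1):
    """Phase 1: sample frames per fixed-length chunk.
       Phase 2: detect faces and classify their expressions.
       Phase 3: aggregate labels into per-chunk emotional metadata."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back if FPS unknown
    step = max(int(fps / frames_per_second), 1)  # ~1 sampled frame/second
    chunk_len = int(chunk_seconds * fps)         # frames per chunk

    metadata, labels, frame_idx = [], Counter(), 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in FACE_DETECTOR.detectMultiScale(gray, 1.3, 5):
                labels[classify_expression(gray[y:y + h, x:x + w])] += 1
        frame_idx += 1
        if frame_idx % chunk_len == 0 and labels:
            # One metadata record per chunk: dominant emotion plus the
            # full distribution, so downstream queries can track how
            # emotions vary from chunk to chunk.
            total = sum(labels.values())
            metadata.append({
                "chunk_start_s": (frame_idx - chunk_len) / fps,
                "chunk_end_s": frame_idx / fps,
                "dominant_emotion": labels.most_common(1)[0][0],
                "distribution": {e: labels[e] / total for e in EMOTIONS},
            })
            labels = Counter()
    # Note: frames after the last full chunk are ignored in this sketch.
    cap.release()
    return metadata

Keeping both a dominant label and a full distribution per chunk keeps the metadata model simple while still exposing emotion variation over time, which is what makes affective indexing and querying of video chunks possible.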

Citation (APA)

Mircoli, A., & Cimini, G. (2018). Automatic extraction of affective metadata from videos through emotion recognition algorithms. In Communications in Computer and Information Science (Vol. 909, pp. 191–202). Springer Verlag. https://doi.org/10.1007/978-3-030-00063-9_19
