An Automatic Multimedia Likability Prediction System Based on Facial Expression of Observer


Abstract

Every individual's perception of multimedia content varies with their interpretation, so predicting the likability of a piece of multimedia from its content alone is challenging. This paper presents a novel system that analyzes the facial expressions of subjects while they watch the multimedia content being evaluated. First, we built a dataset by recording the facial expressions of subjects in an uncontrolled environment. These subjects were volunteers recruited to watch videos of different genres and to report their feedback as likability. Subject responses are divided into three categories: Like, Neutral, and Dislike. A novel multimodal system is then trained on this dataset, learning feature representations from the data across the three categories. The proposed system is an ensemble of a time-distributed convolutional neural network (CNN), a 3D CNN, and long short-term memory (LSTM) networks. Each modality in the proposed architecture is evaluated independently as well as in distinct combinations. The paper also provides detailed insight into the learning behavior of the proposed system.
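The ensemble described in the abstract can be sketched as three branches over a clip of face frames: a 2D CNN applied per frame (time-distributed), a 3D CNN over the whole clip, and an LSTM over the per-frame features, fused into a 3-class head. The following PyTorch sketch illustrates this structure only; all layer sizes, the fusion strategy, and the `LikabilityEnsemble` name are assumptions for illustration, not the paper's actual configuration.

```python
# Illustrative sketch (not the paper's implementation): an ensemble of a
# time-distributed 2D CNN, a 3D CNN, and an LSTM, fused for 3-class
# Like/Neutral/Dislike prediction. All dimensions are hypothetical.
import torch
import torch.nn as nn

class LikabilityEnsemble(nn.Module):
    def __init__(self, num_classes=3, feat_dim=32):
        super().__init__()
        # Branch 1: 2D CNN applied to each frame independently (time-distributed)
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim),
        )
        # Branch 2: 3D CNN over the whole clip (spatio-temporal features)
        self.cnn3d = nn.Sequential(
            nn.Conv3d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, feat_dim),
        )
        # Branch 3: LSTM over per-frame features (temporal dynamics)
        self.lstm = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        # Fusion head: concatenate branch features and classify
        self.head = nn.Linear(feat_dim * 3, num_classes)

    def forward(self, clip):  # clip: (batch, time, channels, height, width)
        b, t, c, h, w = clip.shape
        # Time-distributed trick: fold time into batch, run 2D CNN, unfold
        frame_feats = self.frame_cnn(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        td_feat = frame_feats.mean(dim=1)                    # average over time
        v3d_feat = self.cnn3d(clip.permute(0, 2, 1, 3, 4))   # to (b, c, t, h, w)
        _, (h_n, _) = self.lstm(frame_feats)
        lstm_feat = h_n[-1]                                  # last hidden state
        return self.head(torch.cat([td_feat, v3d_feat, lstm_feat], dim=1))

model = LikabilityEnsemble()
logits = model(torch.randn(2, 8, 3, 32, 32))  # 2 clips of 8 RGB frames, 32x32
```

Evaluating each branch independently, as the abstract describes, would amount to attaching a separate classification head to each branch's feature vector before fusion.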

APA

Bawa, V. S., Sharma, S., Usman, M., Gupta, A., & Kumar, V. (2021). An Automatic Multimedia Likability Prediction System Based on Facial Expression of Observer. IEEE Access, 9, 110421–110434. https://doi.org/10.1109/ACCESS.2021.3102042
