Combining deep facial and ambient features for first impression estimation

Abstract

First impressions influence the behavior of people towards a newly encountered person or a human-like agent. Apart from the physical characteristics of the encountered face, the emotional expressions displayed on it, as well as ambient information, affect these impressions. In this work, we propose an approach to predict the first impressions people will form from a given video depicting a face within a context. We employ pre-trained Deep Convolutional Neural Networks to extract facial expressions as well as ambient information. After video modeling, the visual features representing facial expression and scene are combined and fed to a Kernel Extreme Learning Machine regressor. The proposed system is evaluated on the ChaLearn Challenge Dataset on First Impression Recognition, where the prediction target is the set of "Big Five" personality trait labels for each video. Our system achieved an accuracy of 90.94% on the sequestered test set, 0.36 percentage points below the top system in the competition.
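The final stage of the pipeline described above, a Kernel Extreme Learning Machine (ELM) regressor over fused facial and scene features, has a simple closed-form solution. The sketch below is a minimal, self-contained illustration of kernel ELM regression with an RBF kernel; the feature dimensions, targets, and hyperparameter values are hypothetical placeholders, not the paper's actual configuration.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    # Pairwise RBF kernel between the rows of A and B.
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

class KernelELM:
    """Kernel Extreme Learning Machine regressor.

    Output weights have the closed form beta = (I / C + K)^-1 T,
    where K is the training kernel matrix and C a regularization constant.
    """
    def __init__(self, C=1.0, gamma=0.1):
        self.C = C
        self.gamma = gamma

    def fit(self, X, T):
        self.X_train = X
        K = rbf_kernel(X, X, self.gamma)
        # Solve (I / C + K) beta = T instead of forming an explicit inverse.
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
        return self

    def predict(self, X_new):
        # f(x) = k(x, X_train) @ beta
        return rbf_kernel(X_new, self.X_train, self.gamma) @ self.beta

# Hypothetical fused descriptors: rows = videos, columns = concatenated
# facial-expression and scene features; targets = five trait scores in [0, 1].
rng = np.random.default_rng(0)
X = rng.random((100, 32))
T = rng.random((100, 5))

model = KernelELM(C=10.0, gamma=0.1).fit(X, T)
preds = model.predict(X[:5])   # one 5-dimensional trait vector per video
```

In practice the feature vectors would be the per-video statistics of the deep facial and ambient descriptors, and C and gamma would be tuned on a validation split.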

Citation (APA)

Gürpınar, F., Kaya, H., & Salah, A. A. (2016). Combining deep facial and ambient features for first impression estimation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9915 LNCS, pp. 372–385). Springer Verlag. https://doi.org/10.1007/978-3-319-49409-8_30
