Recognizing emotions based on human actions in videos


Abstract

Systems for the automatic analysis of videos are in high demand as video content on the Internet grows rapidly, and understanding the emotions conveyed by videos (e.g., "anger", "happiness") has become a hot topic. While existing affective computing models mainly focus on facial expression recognition, few attempts have been made to explore the relationship between emotion and human action. In this paper, we propose a comprehensive emotion classification framework based on spatio-temporal volumes built from human actions. For each extracted action unit, we compute Dense-SIFT descriptors and quantize them with K-means to form histograms. Finally, the histograms are fed to a multiclass relevance vector machine (mRVM), which recognizes the human emotion. Experimental results show that our method performs well on the FABO dataset.
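The descriptor-quantization step the abstract describes is a standard bag-of-visual-words pipeline: local descriptors from each action volume are assigned to their nearest K-means codeword, and the counts form a histogram that the classifier consumes. A minimal sketch of that histogram step, using random arrays as stand-ins for the Dense-SIFT descriptors and the learned codebook (neither the paper's actual features nor its mRVM classifier is shown here):

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest codeword and
    return an L1-normalized bag-of-words histogram."""
    # Pairwise distances: (n_descriptors, n_codewords)
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
# Stand-in for 128-D Dense-SIFT descriptors from one action unit
descriptors = rng.normal(size=(200, 128))
# Stand-in for a 50-word codebook that K-means would learn on training data
codebook = rng.normal(size=(50, 128))

h = bow_histogram(descriptors, codebook)
print(h.shape)  # (50,)
```

The resulting fixed-length histogram is what would be passed to the mRVM (or any multiclass classifier) for emotion recognition.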

Citation (APA)

Wang, G., Qin, Z., & Xu, K. (2017). Recognizing emotions based on human actions in videos. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10133 LNCS, pp. 306–317). Springer Verlag. https://doi.org/10.1007/978-3-319-51814-5_26
