We present a computer-vision-based system named Anubhav (a Hindi word meaning "feeling") that recognizes emotional facial expressions from streaming face videos. Our system runs at 10 frames per second (fps) on a 3.2-GHz desktop and at 3 fps on an Android mobile device. Using entropy- and correlation-based analysis, we show that certain salient regions of the face carry most of the expression-related information compared with other face regions. We also show that spatially close features within a salient face region carry correlated information about expression; therefore, only a few features from each salient region suffice to represent an expression. Extracting only a few features considerably reduces response time. Exploiting expression information in both the spatial and temporal dimensions yields good recognition accuracy. We conducted extensive experiments on two publicly available data sets as well as on live video streams. The recognition accuracies of our system on the benchmark CK+ data set and on live video streams are at least 13% and 20% better, respectively, than those of competing approaches.
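The abstract's entropy-based saliency argument can be illustrated with a minimal sketch: a textured face region (e.g. around the mouth or eyes) has a spread-out intensity histogram and hence high Shannon entropy, while a near-uniform region (e.g. a cheek) has low entropy. The `region_entropy` helper and the synthetic patches below are illustrative assumptions, not the paper's actual feature pipeline.

```python
import numpy as np

def region_entropy(region, bins=256):
    """Shannon entropy (bits) of a grayscale image region's intensity histogram."""
    hist, _ = np.histogram(region, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to avoid log2(0)
    return float(-np.sum(p * np.log2(p)))

# Synthetic example: a high-variation "mouth" patch vs. a flat "cheek" patch.
rng = np.random.default_rng(0)
mouth = rng.integers(0, 256, size=(16, 16))  # textured region, high entropy
cheek = np.full((16, 16), 128)               # near-uniform region, zero entropy

# The more informative (salient) region scores higher entropy.
assert region_entropy(mouth) > region_entropy(cheek)
```

Under this view, ranking regions by entropy (and pruning mutually correlated nearby features) motivates keeping only a few features per salient region, which is what saves response time.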
CITATION STYLE
Agarwal, S., Santra, B., & Mukherjee, D. P. (2018). Anubhav: recognizing emotions through facial expression. Visual Computer, 34(2), 177–191. https://doi.org/10.1007/s00371-016-1323-z