Human facial emotion recognition using adaptive sigmoidal transfer function in MLP neural network

The human face is highly expressive of inner feelings, particularly across different states of mind and conditions. Facial expressions have been used in computer vision to understand human responses to stimuli. However, facial expressions are both variable and controllable, so their complete generalization from a computer vision standpoint is difficult and challenging, although acceptable performance can be achieved. In this paper, a two-stage facial expression recognition model is applied, which uses principal component analysis (PCA) as a feature extractor in the first stage and a feedforward neural network with a self-adaptive activation function as a classifier in the second stage. PCA reduces the dimensionality of the features, while the adaptive slope of the transfer function provides an additional trainable parameter alongside the weights, making learning faster and more accurate. Six dominant facial emotions (anger, surprise, sadness, neutral, happiness, and fear) are considered, and performance is tested over variable expressions. The benefit of the proposed self-adaptive activation function is verified on the benchmark XOR classification problem.
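To illustrate the core idea of an adaptive sigmoidal transfer function, the sketch below trains a small MLP on the benchmark XOR problem mentioned in the abstract, giving each sigmoid a trainable slope parameter that is updated by gradient descent alongside the weights. The network size, learning rate, and iteration count are illustrative assumptions, not the paper's published configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR dataset, the benchmark used to validate the adaptive slope.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2-8-1 MLP (sizes are an assumption). s1, s2 are the adaptive slopes.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros((1, 8)); s1 = np.ones((1, 8))
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros((1, 1)); s2 = np.ones((1, 1))

def sigmoid(z):
    # Clip to avoid overflow in exp for large slope * pre-activation values.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -50, 50)))

lr = 0.5
for _ in range(10000):
    # Forward pass: the slope s scales the pre-activation inside the sigmoid.
    z1 = X @ W1 + b1; a1 = sigmoid(s1 * z1)
    z2 = a1 @ W2 + b2; a2 = sigmoid(s2 * z2)

    # Backward pass for mean-squared error. With a = sigmoid(s * z):
    #   da/dz = s * a * (1 - a)   and   da/ds = z * a * (1 - a),
    # so the slope gets its own gradient in addition to the weight gradients.
    e2 = (a2 - y) * a2 * (1 - a2)
    grad_s2 = np.sum(e2 * z2, axis=0, keepdims=True)
    d2 = e2 * s2
    e1 = (d2 @ W2.T) * a1 * (1 - a1)
    grad_s1 = np.sum(e1 * z1, axis=0, keepdims=True)
    d1 = e1 * s1

    W2 -= lr * a1.T @ d2; b2 -= lr * d2.sum(0, keepdims=True); s2 -= lr * grad_s2
    W1 -= lr * X.T @ d1;  b1 -= lr * d1.sum(0, keepdims=True); s1 -= lr * grad_s1

# Threshold the trained network's outputs to get XOR predictions.
pred = (sigmoid(s2 * (sigmoid(s1 * (X @ W1 + b1)) @ W2 + b2)) > 0.5).astype(int)
print(pred.ravel())
```

Because the slopes start at 1.0, the network begins as a standard sigmoid MLP; the extra per-neuron slope gradient is what the abstract credits with faster, more accurate learning.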




Unnisa, I., & Loganathan, R. (2019). Human facial emotion recognition using adaptive sigmoidal transfer function in MLP neural network. International Journal of Engineering and Advanced Technology, 9(1), 4103–4113.
