A Loss Function Base on Softmax for Expression Recognition


Abstract

Benefiting from deep learning, the accuracy of facial expression recognition based on convolutional neural networks has improved greatly. However, the standard softmax loss provides limited discriminative power between classes. To address this, several softmax-based loss functions have been proposed, such as A-Softmax and LMCL. We investigate the geometric meaning of the weights of the final fully connected layer and treat each weight vector as a class center. By extracting feature vectors from a few samples of a new class and appending their mean to the weight matrix, the model gains the ability to recognize custom classes without retraining, while preserving the accuracy of the original classification. On the expression recognition task, the original seven-class model achieves 97.10% accuracy on the CK+ dataset and 88% accuracy on a custom dataset.
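
The core idea of the abstract, treating each row of the fully connected layer as a class center and appending the mean feature of a few support samples as the center of a new class, can be illustrated with a short sketch. The following is a minimal PyTorch sketch based only on the abstract; the cosine-similarity classifier, feature dimension, scale factor, and method names are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Fully connected layer whose weight rows act as class centers."""
    def __init__(self, feat_dim: int, num_classes: int, scale: float = 30.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Normalize features and weights so each logit is a scaled cosine
        # similarity between the sample and a unit-length class center.
        w = F.normalize(self.weight, dim=1)
        x = F.normalize(features, dim=1)
        return self.scale * x @ w.t()

    @torch.no_grad()
    def add_class_from_samples(self, features: torch.Tensor) -> None:
        """Append a new class center: the mean feature of a few support samples."""
        center = F.normalize(features.mean(dim=0, keepdim=True), dim=1)
        self.weight = nn.Parameter(torch.cat([self.weight, center], dim=0))

# Hypothetical usage: `backbone` maps face images to 512-d features.
# classifier = CosineClassifier(feat_dim=512, num_classes=7)
# ... train backbone + classifier with a softmax-based margin loss ...
# new_feats = backbone(new_class_images)        # a handful of samples
# classifier.add_class_from_samples(new_feats)  # now 8-way, no retraining
```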

Citation (APA)

Lu, J., & Wu, B. (2022). A Loss Function Base on Softmax for Expression Recognition. Mobile Information Systems, 2022. https://doi.org/10.1155/2022/8230154
