Facial action units for training convolutional neural networks


Abstract

This paper deals with the problem of training convolutional neural networks (CNNs) with facial action units (AUs). In particular, we focus on the class-imbalance problem in training datasets for facial emotion classification. Since training a CNN on an imbalanced dataset tends to bias learning toward the majority classes and ultimately degrades classification accuracy, the number of training images in the minority classes must be increased until the training images are evenly distributed over all classes. However, finding images with a similar facial emotion for oversampling is difficult. In this paper, we propose using AU features to retrieve images with a similar emotion. Query selection from the minority class and AU-based retrieval are repeated until the amount of training data is balanced across all classes. In addition, to improve classification accuracy, the AU features are fused with the CNN features to train a support vector machine (SVM) for the final classification. Experiments were conducted on three imbalanced facial image datasets: RAF-DB, FER2013, and ExpW. The results demonstrate that CNNs trained with the AU features improve classification accuracy by 3%-4%.
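The two ideas in the abstract — AU-based retrieval to oversample minority classes, and AU/CNN feature fusion for an SVM — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature values are random stand-ins, the retrieval pool is the minority set itself (the paper retrieves from a larger image collection), and all dimensions are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-ins: AU feature vectors (e.g., 17 AU intensities)
# and CNN features for a small pool of labeled face images.
n_major, n_minor, n_aus, n_cnn = 40, 8, 17, 32
au_feats = rng.random((n_major + n_minor, n_aus))
cnn_feats = rng.random((n_major + n_minor, n_cnn))
labels = np.array([0] * n_major + [1] * n_minor)

def retrieve_similar(query_au, pool_au, k=1):
    """Return indices of the k pool images whose AU features are
    closest (Euclidean distance) to the query: the AU-based retrieval step."""
    dists = np.linalg.norm(pool_au - query_au, axis=1)
    return np.argsort(dists)[:k]

# Oversample the minority class: repeatedly pick a minority query and
# add its AU-nearest neighbour until the class counts are balanced.
minority_idx = list(np.flatnonzero(labels == 1))
extra = []
while len(minority_idx) + len(extra) < n_major:
    q = minority_idx[len(extra) % len(minority_idx)]
    nn = retrieve_similar(au_feats[q], au_feats[minority_idx], k=2)
    # nn[0] is the query itself (distance 0); take the next-closest image.
    extra.append(minority_idx[int(nn[1])])

aug_idx = np.concatenate(
    [np.flatnonzero(labels == 0), minority_idx, extra]).astype(int)

# Fuse AU and CNN features by concatenation and train the final SVM.
fused = np.hstack([au_feats, cnn_feats])
clf = SVC(kernel="rbf").fit(fused[aug_idx], labels[aug_idx])
```

After the loop, both classes contribute the same number of training samples, and the SVM sees the concatenated AU+CNN feature vector for each image.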

Citation (APA)

Pham, T. T. D., & Won, C. S. (2019). Facial action units for training convolutional neural networks. IEEE Access, 7, 77816–77824. https://doi.org/10.1109/ACCESS.2019.2921241
