Unsupervised Domain Adaptation for Facial Expression Recognition Using Generative Adversarial Networks


Abstract

In the facial expression recognition task, a convolutional neural network (CNN) model that performs well on one dataset (the source dataset) usually performs poorly on another dataset (the target dataset), because the feature distribution of the same emotion varies across datasets. To improve the cross-dataset accuracy of the CNN model, we introduce an unsupervised domain adaptation method that is especially suitable for small, unlabelled target datasets. To address the shortage of samples in the target dataset, we train a generative adversarial network (GAN) on the target dataset and use the GAN-generated samples to fine-tune the model pretrained on the source dataset. During fine-tuning, we dynamically assign distributed pseudo-labels to the unlabelled GAN-generated samples according to the model's current prediction probabilities. Our method can be easily applied to any existing CNN. We demonstrate its effectiveness on four facial expression recognition datasets with two CNN architectures and obtain encouraging results.
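The fine-tuning step described in the abstract can be sketched roughly as below. This is not the authors' released code: the gan_sampler callable, the temperature sharpening of the soft targets, and all hyperparameters are illustrative assumptions; the abstract only states that unlabelled GAN-generated samples receive distributed pseudo-labels derived dynamically from the current prediction probabilities.

import torch
import torch.nn.functional as F


def finetune_with_pseudolabels(model, gan_sampler, num_steps=1000,
                               batch_size=64, lr=1e-4, temperature=0.5,
                               device="cpu"):
    """Fine-tune a source-pretrained classifier on unlabelled GAN samples.

    gan_sampler(batch_size) is a hypothetical callable returning a batch of
    generated images of shape (B, C, H, W); it stands in for a generator
    trained on the unlabelled target dataset.
    """
    model = model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    for _ in range(num_steps):
        images = gan_sampler(batch_size).to(device)

        # Distributed (soft) pseudo-labels taken from the model's current
        # prediction probabilities; recomputed every step so they track the
        # evolving model. Sharpening with temperature < 1 (an assumption,
        # not stated in the abstract) keeps the target distinct from the
        # raw prediction so the loss gradient does not vanish.
        with torch.no_grad():
            probs = F.softmax(model(images), dim=1)
            sharpened = probs ** (1.0 / temperature)
            pseudo = sharpened / sharpened.sum(dim=1, keepdim=True)

        # Cross-entropy between the soft pseudo-labels and the predictions.
        logits = model(images)
        loss = -(pseudo * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return model

In practice a loss of this kind would typically be combined with the ordinary supervised loss on the labelled source dataset, so the fine-tuned model does not drift away from the original expression classes.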

Citation (APA)

Wang, X., Wang, X., & Ni, Y. (2018). Unsupervised Domain Adaptation for Facial Expression Recognition Using Generative Adversarial Networks. Computational Intelligence and Neuroscience, 2018. https://doi.org/10.1155/2018/7208794
