Effective deep multi-source multi-task learning frameworks for smile detection, emotion recognition and gender classification


Abstract

Automatic human facial recognition has been an active research topic with various potential applications. In this paper, we propose effective multi-task deep learning frameworks that can jointly learn representations for three tasks: smile detection, emotion recognition and gender classification. In addition, our frameworks can be learned from multiple sources of data with different kinds of task-specific class labels. Extensive experiments show that our frameworks achieve superior accuracy over recent state-of-the-art methods on all three tasks on popular benchmarks. We also show that joint learning helps the tasks with less data benefit considerably from the tasks with richer data.
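The abstract describes a shared representation trained jointly for three face-analysis tasks, with training data drawn from sources that each carry only some of the task labels. The sketch below is not the authors' architecture; it is a minimal illustration of the general idea, assuming a small shared convolutional backbone, one classification head per task, and a masked loss in which a label value of -1 marks an annotation missing from a given source.

```python
# Minimal multi-task, multi-source sketch (illustrative only, not the paper's model):
# a shared backbone feeds three task heads, and each sample contributes only to
# the losses for which its source provides labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskFaceNet(nn.Module):
    def __init__(self, num_emotions=7):
        super().__init__()
        # Shared representation learned jointly by all three tasks.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Task-specific heads.
        self.smile_head = nn.Linear(64, 2)             # smile / no smile
        self.emotion_head = nn.Linear(64, num_emotions)
        self.gender_head = nn.Linear(64, 2)

    def forward(self, x):
        z = self.backbone(x)
        return self.smile_head(z), self.emotion_head(z), self.gender_head(z)

def multi_source_loss(outputs, labels):
    """Sum cross-entropy only over tasks whose labels are present
    (label == -1 marks a missing annotation for that sample's source)."""
    total = 0.0
    for logits, y in zip(outputs, labels):
        mask = y >= 0
        if mask.any():
            total = total + F.cross_entropy(logits[mask], y[mask])
    return total

# Example batch mixing a smile-labelled source and an emotion-labelled source.
model = MultiTaskFaceNet()
images = torch.randn(4, 3, 64, 64)
smile_y   = torch.tensor([1, 0, -1, -1])   # -1 = no smile label for this sample
emotion_y = torch.tensor([-1, -1, 3, 5])
gender_y  = torch.tensor([0, 1, 1, 0])
loss = multi_source_loss(model(images), (smile_y, emotion_y, gender_y))
loss.backward()
```

Because every task's gradient flows through the shared backbone, tasks with smaller labelled sets can benefit from the richer supervision of the others, which is the effect the abstract reports.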

Cite

APA

Sang, D. V., & Cuong, L. T. B. (2018). Effective deep multi-source multi-task learning frameworks for smile detection, emotion recognition and gender classification. In Informatica (Slovenia) (Vol. 42, pp. 345–356). Slovene Society Informatika. https://doi.org/10.31449/inf.v42i3.2301
