Facial attribute recognition is an important yet challenging research topic. Unlike most previous approaches, which predict attributes based only on whole images, this paper leverages the locations of facial parts for better attribute prediction. We introduce a facial abstraction image that contains both local facial parts and facial texture information, generated by a Generative Adversarial Network (GAN). We then build a dual-path facial attribute recognition network that exploits features from both the original face images and the facial abstraction images. Empirically, the features of the facial abstraction images are complementary to those of the original face images. With facial parts localized by the abstraction images, our method improves facial attribute recognition, especially for attributes located in small face regions. Extensive evaluations on the CelebA and LFWA benchmark datasets show that state-of-the-art performance is achieved.
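The abstract describes a two-branch architecture that fuses features from the original face image and the synthesized abstraction image before predicting attributes. The sketch below is a minimal illustration of that idea under stated assumptions: the ResNet-18 backbones, the feature dimensions, fusion by concatenation, and the single linear multi-label classifier are illustrative choices, not the authors' exact network.

```python
# Minimal sketch of a dual-path attribute network (assumed design, not the paper's exact model).
import torch
import torch.nn as nn
from torchvision import models


class DualPathAttributeNet(nn.Module):
    """Two CNN branches: one for the original face image, one for the
    GAN-synthesized abstraction image. Their features are concatenated
    and fed to a multi-label attribute classifier (40 attributes for CelebA)."""

    def __init__(self, num_attributes: int = 40):
        super().__init__()
        # Backbone choice (ResNet-18) is an assumption for illustration only.
        self.face_branch = models.resnet18(weights=None)
        self.abstraction_branch = models.resnet18(weights=None)
        feat_dim = self.face_branch.fc.in_features  # 512 for ResNet-18
        # Strip the ImageNet classification heads; keep the pooled feature vectors.
        self.face_branch.fc = nn.Identity()
        self.abstraction_branch.fc = nn.Identity()
        self.classifier = nn.Linear(2 * feat_dim, num_attributes)

    def forward(self, face_img, abstraction_img):
        f_face = self.face_branch(face_img)                # features from the original face image
        f_abs = self.abstraction_branch(abstraction_img)   # features from the abstraction image
        fused = torch.cat([f_face, f_abs], dim=1)          # fuse the complementary features
        return self.classifier(fused)                      # per-attribute logits


# Usage example: a batch of 4 image pairs, 40 CelebA attributes.
model = DualPathAttributeNet(num_attributes=40)
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 40])
```

In a setup like this, each attribute is treated as an independent binary label, so training would typically minimize a binary cross-entropy loss (e.g., BCEWithLogitsLoss) over the per-attribute logits.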
He, K., Fu, Y., Zhang, W., Wang, C., Jiang, Y. G., Huang, F., & Xue, X. (2018). Harnessing synthesized abstraction images to improve facial attribute recognition. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 733–740). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/102