DynFace: A Multi-label, Dynamic-Margin-Softmax Face Recognition Model

Abstract

Convolutional neural networks (CNNs) have recently improved face recognition performance substantially, owing to their strong capability for learning discriminative features. Many early face recognition algorithms reported high accuracy on the small Labeled Faces in the Wild (LFW) dataset but failed to deliver the same results on larger or different datasets. Ongoing research seeks to boost the performance of face recognition methods by modifying either the neural network structure or the loss function. This paper proposes two novel additions to the typical softmax CNN used for face recognition: a fusion of facial attributes at the feature level and a dynamic margin softmax loss. The new network, DynFace, was extensively evaluated on the extended LFW and the much larger MegaFace benchmarks, comparing its performance against established algorithms. DynFace achieved state-of-the-art accuracy at high speed. Results obtained from carefully designed experiments are presented at the end of the paper.
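The abstract does not spell out the exact form of the dynamic margin softmax loss. As a rough, hedged illustration only, the sketch below shows an ArcFace-style angular-margin softmax in which the margin is scaled per sample; the class name `DynamicMarginSoftmax`, the `margin_scale` input, and all hyperparameter values are assumptions for demonstration, not the authors' formulation.

```python
# Hypothetical sketch: angular-margin softmax with a per-sample (dynamic) margin.
# The paper's actual dynamic-margin definition may differ from this illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicMarginSoftmax(nn.Module):
    def __init__(self, feat_dim, num_classes, scale=64.0, base_margin=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)
        self.scale = scale
        self.base_margin = base_margin

    def forward(self, features, labels, margin_scale=None):
        # Cosine similarity between L2-normalised features and class weights.
        cos = F.linear(F.normalize(features), F.normalize(self.weight)).clamp(-1.0, 1.0)
        # Per-sample margin: the base margin modulated by an assumed dynamic factor
        # (e.g. sample difficulty); defaults to a static margin when not provided.
        if margin_scale is None:
            margin_scale = torch.ones(labels.shape[0], dtype=cos.dtype, device=cos.device)
        m = self.base_margin * margin_scale                       # shape (batch,)
        theta = torch.acos(cos.gather(1, labels.view(-1, 1)).squeeze(1))
        target_logit = torch.cos(theta + m)                       # add angular margin
        logits = cos.clone()
        logits.scatter_(1, labels.view(-1, 1), target_logit.view(-1, 1))
        return F.cross_entropy(self.scale * logits, labels)
```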

Citation (APA)

Cordea, M., Ionescu, B., Gadea, C., & Ionescu, D. (2020). DynFace: A Multi-label, Dynamic-Margin-Softmax Face Recognition Model. In Advances in Intelligent Systems and Computing (Vol. 943, pp. 535–550). Springer Verlag. https://doi.org/10.1007/978-3-030-17795-9_39
