Face recognition is a personal identification system that uses the characteristics of an individual's face to establish that person's identity. The face recognition procedure primarily consists of two phases: first, face detection, which locates the face region in an image, and second, recognition, which identifies the detected face as a specific person. In 2015, FaceNet introduced a new method [1] for face recognition, achieving a record accuracy at the time. The essence of the idea is to map face images into a 128-dimensional embedding on a unit hypersphere. The relation between two pictures can then be determined from the distance between their embeddings: if two embeddings are close to each other, the persons in the pictures look similar. The original implementation was written in TensorFlow, and several algorithms [2], such as OpenFace [12], have taken FaceNet as a basis and tried to improve its results. Our goal is to implement the FaceNet solution in Keras, a deep learning library, and to visualize the 128-dimensional representations of the face images using the recently released UMAP algorithm [4].
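The following is a minimal sketch (not the paper's actual code) of the two ideas described above: an embedding network whose outputs are L2-normalized onto the unit hypersphere, and a triplet loss that pulls embeddings of the same person together while pushing embeddings of different persons apart by at least a margin. The backbone, input size, and margin value are illustrative assumptions, not the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

EMBEDDING_DIM = 128  # FaceNet embeds faces into a 128-dimensional space


def build_embedding_model(input_shape=(160, 160, 3)):
    """Toy backbone for illustration; the paper would use a much deeper CNN."""
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(EMBEDDING_DIM)(x)
    # Project onto the unit hypersphere so Euclidean distance between
    # embeddings is a meaningful measure of face similarity.
    outputs = layers.Lambda(lambda t: tf.math.l2_normalize(t, axis=1))(x)
    return Model(inputs, outputs, name="face_embedding")


def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: require d(a, p) + margin < d(a, n)."""
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=1)
    return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + margin, 0.0))
```

A hypothetical usage of such a model for the visualization step: the 128-dimensional embeddings can be projected to two dimensions with the umap-learn package, which is one way to realize the UMAP visualization mentioned above (the random inputs here are placeholders for preprocessed face crops).

```python
import numpy as np
import umap  # pip install umap-learn

model = build_embedding_model()
embeddings = model.predict(np.random.rand(10, 160, 160, 3))     # placeholder images
projection = umap.UMAP(n_components=2).fit_transform(embeddings)  # shape (10, 2)
```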
Citation: Gorijavaram, A., Ramanathan, L., Abdalla, H. B., Prabhakaran, N., Ramani, S., & Rajkumar, S. (2019). Face recognition using triplet loss function in Keras. International Journal of Engineering and Advanced Technology, 8(3), 667–671.