Deep metric learning for video-based person re-identification

Abstract

This paper proposes a novel approach to video-based person re-identification that exploits deep convolutional neural networks to learn the similarity of persons observed by video cameras. A Convolutional Neural Network (CNN) maps each video sequence of a person to a Euclidean space in which distances between feature embeddings directly correspond to measures of person similarity. With an improved parameter learning method called Entire Triplet Loss, all possible triplets in the mini-batch are taken into account when updating the network parameters at once. This simple change to the parameter update significantly improves network training and makes the embeddings more discriminative. Experimental results show that the proposed model achieves new state-of-the-art identification rates on the iLIDS-VID and PRID-2011 datasets, with 78.3% and 83.9% at rank 1, respectively.
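The abstract describes the Entire Triplet Loss only at a high level: every valid (anchor, positive, negative) triplet in the mini-batch contributes to one parameter update, rather than a sampled or hardest-only subset. The sketch below is a minimal PyTorch illustration of that "batch-all" reading; the function name `entire_triplet_loss` and the `margin` value are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def entire_triplet_loss(embeddings, labels, margin=0.3):
    """Batch-all triplet loss over every valid triplet in the mini-batch.

    embeddings: (N, D) tensor of sequence-level feature embeddings
    labels:     (N,) tensor of person identities
    """
    # Pairwise Euclidean distances between all embeddings in the batch.
    dist = torch.cdist(embeddings, embeddings, p=2)            # (N, N)

    # Positive pairs: same identity, different sample; negatives: different identity.
    same_id = labels.unsqueeze(0) == labels.unsqueeze(1)       # (N, N)
    not_self = ~torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = same_id & not_self
    neg_mask = ~same_id

    # loss[a, p, n] = d(a, p) - d(a, n) + margin for every candidate triplet.
    loss = dist.unsqueeze(2) - dist.unsqueeze(1) + margin      # (N, N, N)
    valid = pos_mask.unsqueeze(2) & neg_mask.unsqueeze(1)      # triplet validity mask
    loss = F.relu(loss) * valid.float()

    # Average over all valid triplets so every one of them contributes
    # to the single parameter update for this mini-batch.
    num_valid = valid.sum().clamp(min=1)
    return loss.sum() / num_valid
```

In use, `embeddings` would be the CNN outputs for the video sequences in a mini-batch and `labels` their person IDs; averaging over all valid triplets (rather than mining a few) is what distinguishes this formulation from standard sampled triplet loss.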

Citation (APA)
Kato, N., Hakozaki, K., Tanabiki, M., Furuyama, J., Sato, Y., & Aoki, Y. (2017). Deep metric learning for video-based person re-identification. Seimitsu Kogaku Kaishi/Journal of the Japan Society for Precision Engineering, 83(12), 1117–1124. https://doi.org/10.2493/jjspe.83.1117
