Cross-Camera erased feature learning for unsupervised person re-identification

Abstract

Most supervised person re-identification methods achieve excellent performance, but labeled datasets are expensive to obtain, which limits their application in practical scenarios. To address this scalability problem, we propose a Cross-camera Erased Feature Learning (CEFL) framework for unsupervised person re-identification that learns discriminative features from image appearance without manual annotations, exploring both the cross-camera global image appearance and the local details. Specifically, for the global appearance, we generate style-transferred images to bridge the gap between images of the same identity under different cameras; the network is trained to classify the original images, the style-transferred images, and the negative samples. To learn the partial details of the images, we generate erased images and train the network to pull similar erased images together and push dissimilar ones apart. In addition, we jointly learn the discriminative global and local information to obtain a more robust model: global and erased features are used together in feature learning, following the successful design of BFENet. Extensive experiments demonstrate the superiority of CEFL in unsupervised person re-identification.
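The erased-image objective described above (pulling similar erased images together and pushing dissimilar ones apart) can be sketched with a random rectangular erasing augmentation and a triplet margin loss. This is an illustrative assumption; the abstract does not specify the exact erasing policy or loss function used in CEFL, and the function names here are hypothetical:

```python
import numpy as np

def random_erase(img, erase_frac=0.3, rng=None):
    """Zero out a random rectangular patch covering roughly
    `erase_frac` of the image area (illustrative erasing policy;
    the paper's exact scheme is not given in the abstract)."""
    rng = np.random.default_rng(0) if rng is None else rng
    h, w = img.shape[:2]
    # Side lengths chosen so the patch area is about erase_frac * h * w.
    eh = max(1, int(h * np.sqrt(erase_frac)))
    ew = max(1, int(w * np.sqrt(erase_frac)))
    top = rng.integers(0, h - eh + 1)
    left = rng.integers(0, w - ew + 1)
    out = img.copy()
    out[top:top + eh, left:left + ew] = 0.0
    return out

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Pull similar (erased) features together and push dissimilar
    ones apart: max(0, d(a, p) - d(a, n) + margin)."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)
```

In training, features extracted from two erased views of the same image would serve as anchor and positive, while features from a different identity supply the negative.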

Cite

CITATION STYLE

APA

Wu, S., & Gao, L. (2020). Cross-Camera erased feature learning for unsupervised person re-identification. Algorithms, 13(8). https://doi.org/10.3390/A13080193
