Camera Style Guided Feature Generation for Person Re-identification


Abstract

Camera variance has long been a troublesome issue in person re-identification (re-ID). Recently, growing interest has turned to alleviating the camera variance problem through data augmentation with generative models. However, these methods, mostly based on image-level generative adversarial networks (GANs), demand huge computational power to train the generative models. In this paper, we propose to address person re-ID with a feature-level camera-style guided GAN, which serves as an intra-class augmentation method that enhances model robustness against camera variance. Specifically, the proposed method performs camera-style transfer on input features while preserving the corresponding identity information. Moreover, the training process can be injected directly into the re-ID task in an end-to-end manner, so our method can be deployed at much lower time and space cost. Experiments show the validity of the generative model and its benefit to re-ID performance on the Market-1501 and DukeMTMC-reID datasets.
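The abstract describes transferring camera style at the feature level, rather than the image level, while keeping the identity label fixed so the generated feature acts as an intra-class augmentation. A minimal NumPy sketch of that data flow (the generator form, dimensions, and names here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM, NUM_CAMS = 256, 6   # illustrative sizes, not taken from the paper

# Hypothetical feature-level generator: a single linear map from the
# concatenated [input feature ; target-camera one-hot] back to feature space.
W = rng.standard_normal((FEAT_DIM, FEAT_DIM + NUM_CAMS)) * 0.01
b = np.zeros(FEAT_DIM)

def transfer_style(feat, target_cam):
    """Map a feature to the style of target_cam; the identity label is unchanged."""
    cam_onehot = np.eye(NUM_CAMS)[target_cam]
    x = np.concatenate([feat, cam_onehot])
    return feat + W @ x + b        # residual connection keeps identity content

feat = rng.standard_normal(FEAT_DIM)      # feature extracted under camera 0
aug = transfer_style(feat, target_cam=3)  # same person, camera-3 style
assert aug.shape == feat.shape            # output stays in feature space
```

Because `aug` keeps the original person's identity label, the pair `(feat, aug)` forms an intra-class training pair; in the paper such a generator is trained jointly with the re-ID objective (end-to-end), which is what avoids the separate, expensive image-level GAN training stage.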

Citation (APA)

Hu, H., Liu, Y., Lv, K., Zheng, Y., Zhang, W., Ke, W., & Sheng, H. (2020). Camera Style Guided Feature Generation for Person Re-identification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12384 LNCS, pp. 158–169). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-59016-1_14
