Pose-normalized image generation for person re-identification

Citations: 79
Mendeley readers: 263

This article is free to access.

Abstract

Person re-identification (re-id) faces two major challenges: the lack of cross-view paired training data and the difficulty of learning discriminative, identity-sensitive yet view-invariant features in the presence of large pose variations. In this work, we address both problems by proposing a novel deep person image generation model that synthesizes realistic person images conditioned on the pose. The model is based on a generative adversarial network (GAN) designed specifically for pose normalization in re-id, and is thus termed the pose-normalization GAN (PN-GAN). With the synthesized images, we can learn a new type of deep re-id feature that is free from the influence of pose variations. We show that these features are complementary to features learned from the original images. Importantly, we also consider a more realistic unsupervised learning setting and show that our model can generalize to a new re-id dataset without any fine-tuning. The code will be released at https://github.com/naiq/PN_GAN.
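
To make the pose-conditioning idea concrete, below is a minimal sketch of a pose-conditioned generator in PyTorch. The layer sizes, the 18-channel keypoint-heatmap pose representation, and the simple encoder-decoder layout are illustrative assumptions only, not the released PN-GAN architecture; they show the general pattern of concatenating a source image with a target-pose map and decoding an image of the same person in that pose.

# Illustrative sketch only: a minimal pose-conditioned generator.
# Layer widths, the 18-channel pose heatmap, and the encoder-decoder layout
# are assumptions for illustration, not the authors' released PN-GAN model.
import torch
import torch.nn as nn

class PoseConditionedGenerator(nn.Module):
    def __init__(self, img_channels=3, pose_channels=18, base=64):
        super().__init__()
        # Encoder: the source person image and the target-pose heatmaps are
        # concatenated along the channel axis and encoded jointly.
        self.encoder = nn.Sequential(
            nn.Conv2d(img_channels + pose_channels, base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(base * 2),
            nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to the input resolution and output an RGB
        # image of the same person rendered in the target pose.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
            nn.BatchNorm2d(base),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, img_channels, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, img, target_pose):
        # img: (N, 3, H, W) source person image
        # target_pose: (N, 18, H, W) keypoint heatmaps of the desired pose
        x = torch.cat([img, target_pose], dim=1)
        return self.decoder(self.encoder(x))

# Usage sketch (hypothetical names): synthesize a pose-normalized image, then
# extract re-id features from both the original and the synthesized image and
# concatenate them, reflecting the complementarity noted in the abstract.
# g = PoseConditionedGenerator()
# synth = g(img, canonical_pose)                    # pose-normalized rendering
# feat = torch.cat([cnn(img), cnn(synth)], dim=1)   # combined re-id descriptor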

Citation (APA)

Qian, X., Fu, Y., Xiang, T., Wang, W., Qiu, J., Wu, Y., … Xue, X. (2018). Pose-normalized image generation for person re-identification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11213 LNCS, pp. 661–678). Springer Verlag. https://doi.org/10.1007/978-3-030-01240-3_40
