Person re-identification via learning visual similarity on corresponding patch pairs


Abstract

When matching people across disjoint camera views, humans concentrate on differences between relatively small but salient body regions; we therefore treat these local differences as the most discriminative cue for person re-identification (Re-ID). Unlike existing methods that learn discriminative features to handle viewpoint variation through global visual similarity, we propose an algorithm that learns visual similarity from corresponding patch pairs (CPPs). The CPP representation matches corresponding body patches of the same person across images and is robust to variations in body pose, viewpoint, and illumination. The similarity between two people is measured by an improved bi-directional weighting mechanism with a TF-IDF-like patch weight. Finally, a complementary similarity measure and a mutually exclusive regulation further improve Re-ID performance. In quantitative evaluation on public datasets, the best rank-1 matching rate on the VIPeR dataset improves by 4.14%.
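The TF-IDF-like patch weighting mentioned in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' exact formulation: it assumes each patch has already been quantized to a discrete appearance code, and it down-weights codes that occur in many gallery images (common, uninformative regions) while up-weighting rare, salient ones.

```python
import math
from collections import Counter

def tfidf_patch_weights(image_patch_codes, gallery_patch_codes):
    """Weight each patch code of a probe image by a TF-IDF-like score.

    image_patch_codes:   list of discrete appearance codes for one image's patches
    gallery_patch_codes: list of such lists, one per gallery image

    Codes common across the gallery (e.g. plain torso regions) receive low
    weight; rare, salient codes receive high weight.
    """
    n_images = len(gallery_patch_codes)
    # Document frequency: in how many gallery images each code appears.
    df = Counter()
    for codes in gallery_patch_codes:
        for code in set(codes):
            df[code] += 1
    # Term frequency within the probe image.
    tf = Counter(image_patch_codes)
    n_patches = len(image_patch_codes)
    weights = {}
    for code, count in tf.items():
        # Smoothed inverse document frequency, as in standard TF-IDF.
        idf = math.log((1 + n_images) / (1 + df[code])) + 1.0
        weights[code] = (count / n_patches) * idf
    return weights
```

In a full Re-ID pipeline these per-patch weights would then feed into the bi-directional similarity between a probe and a gallery image, so that salient regions dominate the match score.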

CITATION STYLE

APA

Sheng, H., Huang, Y., Zheng, Y., Chen, J., & Xiong, Z. (2015). Person re-identification via learning visual similarity on corresponding patch pairs. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9403, pp. 787–798). Springer Verlag. https://doi.org/10.1007/978-3-319-25159-2_73
