Global Deep Feature Representation for Person Re-Identification


Abstract

Person re-identification (re-ID) has attracted tremendous attention in computer vision, especially in intelligent visual surveillance (IVS). The purpose of re-ID is to retrieve a person of interest across different cameras. Many challenges remain, such as similar appearance (e.g., clothing), varying distances from the lens, diverse poses, and different shooting angles, all of which affect re-ID performance. In this paper, we propose a novel architecture, called the global deep convolutional network (GDCN), which uses a classical convolutional network as its backbone and computes the similarity between query and gallery images. We evaluate the proposed GDCN on three large-scale public datasets, achieving 92.72% Rank-1 and 88.86% mAP on Market-1501, 60.78% Rank-1 and 62.47% mAP on CUHK03, and 82.22% Rank-1 and 77.99% mAP on DukeMTMC-reID, respectively. We also compare these results with previous work to verify the state-of-the-art performance of the proposed method, which is implemented on an NVIDIA GeForce GTX 1080Ti.
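The retrieval step described in the abstract, matching a query against a gallery by feature similarity, can be illustrated with a minimal sketch. This is not the authors' GDCN implementation; the embedding dimension (2048) and the use of cosine similarity over L2-normalized global features are assumptions, with random vectors standing in for CNN backbone outputs.

```python
import numpy as np

# Stand-ins for global features from a CNN backbone (dimension 2048 is assumed).
rng = np.random.default_rng(0)
query = rng.normal(size=(1, 2048))    # one query embedding
gallery = rng.normal(size=(6, 2048))  # six gallery embeddings

def l2_normalize(x, eps=1e-12):
    """Scale each row to unit length so the dot product equals cosine similarity."""
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

q = l2_normalize(query)
g = l2_normalize(gallery)

# Cosine similarity between the query and every gallery embedding.
scores = (q @ g.T).ravel()

# Rank gallery entries from most to least similar; Rank-1 accuracy asks
# whether ranking[0] is a true match, and mAP averages precision over the list.
ranking = np.argsort(-scores)
print(ranking)
```

Rank-1 and mAP, the metrics reported in the abstract, are both computed from this ranked list against ground-truth identity labels.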


CITATION STYLE

APA

Fu, M., Sun, S., Chen, N., Tong, X., Wu, X., Huang, Z., & Ni, K. (2020). Global Deep Feature Representation for Person Re-Identification. In Lecture Notes in Electrical Engineering (Vol. 571 LNEE, pp. 179–186). Springer. https://doi.org/10.1007/978-981-13-9409-6_22
