Person re-identification (re-ID) has attracted tremendous attention in the field of computer vision, especially in intelligent visual surveillance (IVS). The purpose of re-ID is to retrieve a person of interest across different cameras. Many challenges remain, such as similar appearance (e.g., clothing), varying camera-to-subject distance, diverse poses, and different shooting angles, all of which degrade re-ID performance. In this paper, we propose a novel architecture, called global deep convolutional network (GDCN), which uses a classical convolutional network as the backbone and computes the similarity between query and gallery images. We evaluate the proposed GDCN on three large-scale public datasets, achieving 92.72% Rank-1 and 88.86% mAP on Market-1501, 60.78% Rank-1 and 62.47% mAP on CUHK03, and 82.22% Rank-1 and 77.99% mAP on DukeMTMC-reID, respectively. In addition, we compare our experimental results with previous work to verify the state-of-the-art performance of the proposed method, which is implemented on an NVIDIA GeForce GTX 1080Ti.
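The abstract does not specify how query-gallery similarity is computed; a common choice in re-ID pipelines is cosine similarity between deep feature embeddings, with the gallery ranked by score (Rank-1 then asks whether the top-ranked entry matches the query identity). A minimal sketch under that assumption, with a hypothetical `rank_gallery` helper:

```python
import numpy as np

def rank_gallery(query_feat, gallery_feats):
    """Rank gallery entries by cosine similarity to the query feature.

    query_feat: 1-D feature vector from the backbone network.
    gallery_feats: 2-D array, one feature vector per gallery image.
    Returns gallery indices sorted from most to least similar.
    """
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q                # cosine similarity per gallery entry
    return np.argsort(-sims)    # descending order of similarity

# Toy example with 2-D features: gallery entry 1 points almost
# the same way as the query, so it should be ranked first.
query = np.array([1.0, 0.0])
gallery = np.array([[0.0, 1.0],   # orthogonal to the query
                    [0.9, 0.1],   # nearly parallel to the query
                    [0.5, 0.5]])  # in between
order = rank_gallery(query, gallery)  # → [1, 2, 0]
```

This is illustrative only; the actual GDCN metric and feature dimensionality are described in the paper itself.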
Citation:
Fu, M., Sun, S., Chen, N., Tong, X., Wu, X., Huang, Z., & Ni, K. (2020). Global Deep Feature Representation for Person Re-Identification. In Lecture Notes in Electrical Engineering (Vol. 571 LNEE, pp. 179–186). Springer. https://doi.org/10.1007/978-981-13-9409-6_22