SafeNet: Scale-normalization and Anchor-based Feature Extraction Network for Person Re-identification

Abstract

Person Re-identification (ReID) is a challenging retrieval task that requires matching a person's image across non-overlapping camera views. The quality of this matching is largely determined by the robustness of the features used to describe the person. In this paper, we show the advantage of jointly utilizing multi-scale abstract information to learn powerful features over the full body and its parts. A scale-normalization module is proposed to balance different scales through residual-based integration. To exploit the information hidden in non-rigid body parts, we propose an anchor-based method that captures local content by stacking convolutions with kernels of various aspect ratios, each focusing on a different spatial distribution. Finally, a well-defined framework is constructed for simultaneously learning representations of both the full body and its parts. Extensive experiments on challenging large-scale person ReID datasets, including Market1501, CUHK03 and DukeMTMC, demonstrate that our proposed method achieves state-of-the-art results.
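To make the two components described above concrete, the following is a minimal PyTorch sketch, not the authors' released implementation: a residual-style scale-integration module and an anchor branch that stacks convolutions with kernels of different aspect ratios. All class names, aspect ratios, and channel sizes are illustrative assumptions.

```python
# Illustrative sketch of the abstract's two ideas; names and
# hyper-parameters are assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaleNormalization(nn.Module):
    """Balance feature maps from different scales via residual-based
    integration (assumed form: project the coarser map, upsample, add)."""

    def __init__(self, low_channels, high_channels):
        super().__init__()
        self.project = nn.Sequential(
            nn.Conv2d(high_channels, low_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(low_channels),
        )

    def forward(self, low, high):
        # low:  finer-resolution feature map, e.g. (N, C_l, H, W)
        # high: coarser, more abstract map,   e.g. (N, C_h, H/2, W/2)
        high = self.project(high)
        high = F.interpolate(high, size=low.shape[-2:],
                             mode="bilinear", align_corners=False)
        return low + high  # residual-style integration of the two scales


class AnchorBranch(nn.Module):
    """Capture local body-part content with convolution kernels of
    different aspect ratios, each focusing on a different spatial region."""

    def __init__(self, in_channels, out_channels,
                 aspect_ratios=((1, 3), (3, 1), (3, 3))):  # assumed ratios
        super().__init__()
        self.branches = nn.ModuleList()
        for kh, kw in aspect_ratios:
            self.branches.append(nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=(kh, kw),
                          padding=(kh // 2, kw // 2), bias=False),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            ))

    def forward(self, x):
        # Concatenate the per-anchor responses along the channel dimension.
        return torch.cat([b(x) for b in self.branches], dim=1)


if __name__ == "__main__":
    low = torch.randn(2, 256, 24, 8)    # finer-scale backbone features
    high = torch.randn(2, 512, 12, 4)   # coarser-scale backbone features
    fused = ScaleNormalization(256, 512)(low, high)
    parts = AnchorBranch(256, 128)(fused)
    print(fused.shape, parts.shape)     # (2, 256, 24, 8) (2, 384, 24, 8)
```

In this sketch the anchor outputs would feed a part-level embedding head alongside a global full-body embedding, mirroring the joint full-body/part learning the abstract describes; the actual branch structure and losses are specified in the paper itself.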

Citation (APA)
Yuan, K., Zhang, Q., Huang, C., Xiang, S., & Pan, C. (2018). SafeNet: Scale-normalization and Anchor-based Feature Extraction Network for Person Re-identification. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 1121–1127). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/156
