Attention-Based Network for Cross-View Gait Recognition


Abstract

Existing gait recognition approaches based on CNNs (Convolutional Neural Networks) extract features from different human parts indiscriminately, without considering spatial heterogeneity. This may cause a loss of discriminative information for gait recognition, since different human parts vary in shape, movement constraints, and so on. In this work, we devise an attention-based embedding network to address this problem. The attention module incorporated in our network assigns different saliency weights to different parts of the feature maps at the pixel level. The embedding network strives to embed gait features into a low-dimensional latent space such that similarities can be measured simply by Euclidean distance. To achieve this goal, a combination of contrastive loss and triplet loss is used for training. Experiments demonstrate that our proposed network outperforms state-of-the-art methods on both the OULP and MVLP datasets under cross-view conditions. Notably, we achieve a 6.4% rank-1 recognition accuracy improvement under a 90° angular difference on MVLP and 3.6% under a 30° angular difference on OULP.
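The abstract describes two ideas that can be illustrated concretely: pixel-level saliency weighting of feature maps, and a training objective combining contrastive and triplet losses. The sketch below is illustrative only; the function names, the sigmoid saliency, and the loss weighting `lam` are assumptions, not details from the paper.

```python
import numpy as np

def pixel_attention(features, scores):
    """Weight each spatial location of a feature map by a saliency score
    (hypothetical stand-in for the paper's pixel-level attention module)."""
    # features: (C, H, W); scores: (H, W) raw attention scores
    saliency = 1.0 / (1.0 + np.exp(-scores))  # sigmoid -> weights in [0, 1]
    return features * saliency                # broadcast over channels

def contrastive_loss(d, same, margin=1.0):
    """d: Euclidean distance between a pair of embeddings;
    same=1 pulls the pair together, same=0 pushes it beyond the margin."""
    return same * d ** 2 + (1 - same) * max(0.0, margin - d) ** 2

def triplet_loss(d_ap, d_an, margin=0.5):
    """Pull anchor-positive distance below anchor-negative by a margin."""
    return max(0.0, d_ap - d_an + margin)

def combined_loss(d_ap, d_an, lam=1.0):
    """Sum of both objectives; the weighting lam is an assumption."""
    return (contrastive_loss(d_ap, 1) + contrastive_loss(d_an, 0)
            + lam * triplet_loss(d_ap, d_an))
```

For a well-separated triplet (e.g. `d_ap = 0.0`, `d_an = 2.0`), every term is zero, so the combined loss vanishes; shrinking the anchor-negative distance below the margins makes the loss positive again.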

Citation (APA)

Huang, Y., Zhang, J., Zhao, H., & Zhang, L. (2018). Attention-Based Network for Cross-View Gait Recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11307 LNCS, pp. 489–498). Springer Verlag. https://doi.org/10.1007/978-3-030-04239-4_44
