Appearance and motion enhancement for video-based person re-identification


Abstract

In this paper, we propose an Appearance and Motion Enhancement Model (AMEM) for video-based person re-identification, which enriches the two kinds of information contained in the backbone network in a more interpretable way. Concretely, an Appearance Enhancement Module (AEM) exploits human attribute recognition under the supervision of pseudo labels to enrich the appearance and semantic information. A Motion Enhancement Module (MEM) is designed to capture identity-discriminative walking patterns by predicting future frames. Although training involves a complex model with several auxiliary modules, only the backbone plus two small branches are kept for similarity evaluation, which constitute a simple but effective final model. Extensive experiments conducted on three popular video-based person ReID benchmarks demonstrate the effectiveness of our proposed model and its state-of-the-art performance compared with existing methods.
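The test-time pipeline described above — a backbone plus two small branches producing a single descriptor that is compared by similarity — can be sketched as follows. The feature dimensions, concatenation-based fusion, cosine similarity, and all function names here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    # Scale feature vectors to unit length so a dot product
    # becomes cosine similarity.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def fuse_features(backbone_feat, appearance_feat, motion_feat):
    # Hypothetical fusion: concatenate the backbone output with the
    # outputs of the two small branches retained at test time
    # (corresponding to the AEM and MEM branches in the abstract).
    return np.concatenate([backbone_feat, appearance_feat, motion_feat],
                          axis=-1)

def rank_gallery(query_desc, gallery_descs):
    # Cosine similarity between one query descriptor and each
    # gallery descriptor; higher score = more likely same identity.
    q = l2_normalize(query_desc)
    g = l2_normalize(gallery_descs)
    return g @ q

# Toy example with random "features" standing in for network outputs.
rng = np.random.default_rng(0)
query = fuse_features(rng.normal(size=2048),   # assumed backbone dim
                      rng.normal(size=256),    # assumed AEM branch dim
                      rng.normal(size=256))    # assumed MEM branch dim
gallery = np.stack([
    fuse_features(rng.normal(size=2048),
                  rng.normal(size=256),
                  rng.normal(size=256))
    for _ in range(5)
])
scores = rank_gallery(query, gallery)
best_match = int(np.argmax(scores))
```

The auxiliary training modules (pseudo-label attribute recognition, future-frame prediction) would be discarded before this stage, so retrieval cost depends only on the fused descriptor size.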

Citation (APA)
Li, S., Yu, H., & Hu, H. (2020). Appearance and motion enhancement for video-based person re-identification. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 11394–11404). AAAI Press. https://doi.org/10.1609/aaai.v34i07.6802
