Co-Attentive Lifting for Infrared-Visible Person Re-Identification

Abstract

Infrared-visible cross-modality person re-identification (IV-ReID) has attracted much attention with the popularity of dual-mode video surveillance systems, in which the RGB mode operates in the daytime and automatically switches to the infrared mode at night. Despite its significant application value, IV-ReID remains a difficult problem, mainly due to two great challenges. First, it is difficult to identify persons in infrared images, which lack color and texture cues. Second, there is a significant gap between the infrared and visible modalities, across which the appearance of the same person varies considerably. This paper proposes a novel attention-based approach that handles the two difficulties in a unified framework. 1) We propose an attention lifting mechanism to learn discriminative features within each modality. 2) We propose a co-attentive learning mechanism to bridge the gap between the two modalities. Our method requires only slight modifications to a given backbone network and incurs a small computational overhead, while improving performance significantly. We conduct extensive experiments to demonstrate the superiority of the proposed method.
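The abstract does not detail the attention lifting design, but the general idea of reweighting backbone features with a learned attention gate can be sketched with a generic channel-attention block (squeeze-and-excitation style). Everything below, including the reduction ratio `r` and the weight shapes, is an illustrative assumption, not the paper's actual architecture:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Generic channel-attention gate over a backbone feature map.

    Illustrative sketch only; the paper's 'attention lifting' module
    is not specified in this abstract.
    feat: (C, H, W) feature map; w1: (C//r, C); w2: (C, C//r).
    """
    c = feat.shape[0]
    squeezed = feat.reshape(c, -1).mean(axis=1)       # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)           # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))       # sigmoid gate in (0, 1) -> (C,)
    return feat * gate[:, None, None]                 # reweight channels

# Hypothetical shapes for demonstration.
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = channel_attention(feat, w1, w2)
print(out.shape)  # -> (8, 4, 4)
```

In a cross-modality setting such as IV-ReID, one such gate would typically be applied per modality branch, with the co-attentive component coupling the two branches; that coupling is likewise not described in this abstract.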

Citation (APA)

Wei, X., Li, D., Hong, X., Ke, W., & Gong, Y. (2020). Co-Attentive Lifting for Infrared-Visible Person Re-Identification. In MM 2020 - Proceedings of the 28th ACM International Conference on Multimedia (pp. 1028–1037). Association for Computing Machinery, Inc. https://doi.org/10.1145/3394171.3413933
