To overcome the illumination limitation of visible-spectrum person re-identification (Re-ID) and the heterogeneity issue of cross-modality Re-ID, we propose to exploit the complementary advantages of multiple modalities, namely visible (RGB), near-infrared (NI), and thermal-infrared (TI), for robust person Re-ID. A novel progressive fusion network is designed to learn effective multi-modal features, progressing from single to multiple modalities and from local to global views. Our method performs well in diverse challenging scenarios, even in the presence of missing modalities. Moreover, we contribute a comprehensive benchmark dataset, RGBNT201, comprising 201 identities captured under various challenging conditions, to facilitate research on RGB-NI-TI multi-modality person Re-ID. Comprehensive experiments on the RGBNT201 dataset, compared with state-of-the-art methods, demonstrate the benefit of multi-modality person Re-ID and the effectiveness of the proposed approach, establishing a new benchmark and a new baseline for multi-modality person Re-ID.
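To make the progressive fusion idea concrete, below is a minimal sketch of such an architecture, not the authors' implementation: per-modality branches extract local part features, which are first fused across modalities per part (local view) and then aggregated into a single global descriptor. All module names, dimensions, and the tiny stand-in backbone are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ModalityBranch(nn.Module):
    """Single-modality feature extractor (a small CNN stands in for a real backbone)."""

    def __init__(self, out_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 1)),  # 4 horizontal body parts (local view)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (B, out_dim, 4, 1) -> (B, 4 parts, out_dim)
        return self.encoder(x).flatten(2).transpose(1, 2)


class ProgressiveFusion(nn.Module):
    """Fuse single-modality part features into one global multi-modal descriptor."""

    def __init__(self, dim: int = 256, num_ids: int = 201):
        super().__init__()
        self.branches = nn.ModuleDict(
            {m: ModalityBranch(dim) for m in ("rgb", "ni", "ti")}
        )
        self.local_fuse = nn.Linear(3 * dim, dim)   # fuse modalities per part
        self.global_fuse = nn.Linear(4 * dim, dim)  # fuse parts into global feature
        self.classifier = nn.Linear(dim, num_ids)   # identity logits for training

    def forward(self, rgb, ni, ti):
        # Stage 1: single-modality local features. A missing modality could be
        # handled here by, e.g., zero features or copying an available branch.
        parts = [self.branches[m](x) for m, x in
                 (("rgb", rgb), ("ni", ni), ("ti", ti))]
        # Stage 2: local (per-part) cross-modality fusion.
        local = torch.relu(self.local_fuse(torch.cat(parts, dim=-1)))  # (B, 4, dim)
        # Stage 3: local-to-global fusion into one descriptor per person.
        feat = torch.relu(self.global_fuse(local.flatten(1)))          # (B, dim)
        return feat, self.classifier(feat)


# Usage: a batch of spatially aligned RGB/NI/TI crops of the same persons.
model = ProgressiveFusion()
rgb = ni = ti = torch.randn(2, 3, 128, 64)
feat, logits = model(rgb, ni, ti)
print(feat.shape, logits.shape)  # torch.Size([2, 256]) torch.Size([2, 201])
```

At retrieval time, `feat` would serve as the matching descriptor; the identity classifier is used only during training.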
Zheng, A., Wang, Z., Chen, Z., Li, C., & Tang, J. (2021). Robust Multi-Modality Person Re-identification. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 4B, pp. 3529–3537). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i4.16467