Infrared-Visible person Re-IDentification (IV-ReID) is an emerging topic with significant research value for nighttime surveillance. Existing works focus on reducing the cross-modality discrepancy, but this discrepancy cannot be eliminated completely. We therefore concentrate on excavating cross-modality commonalities to handle the task. Since features that are similar across the two modalities embody cross-modality commonalities, our goal is to find more such features in infrared and visible images. In this paper, we propose a novel Dual-stream Multi-layer Corresponding Fusion Network (DMCF) to explore more similar features between the two modalities. It contains three main aspects. 1) We explore more similar features between the two modalities by learning low-level features; meanwhile, we propose to correspondingly fuse same-level features of the two modalities to reduce cross-modality discrepancies. 2) We adopt different multi-granularity dividing methods for multi-layer features, which improves the model's ability to perceive feature details. 3) We calculate the loss separately for features at different layers and learn different weighting factors for these losses through multi-task learning, so that each branch can be fully optimized. Extensive experiments on two datasets demonstrate superior performance compared with state-of-the-art methods.
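To make the corresponding-fusion idea concrete, here is a minimal PyTorch sketch (not the authors' released code) in which two modality-specific streams produce same-level feature maps that are fused element-wise. The class name DualStreamFusion, the stage design, and the additive fusion operator are illustrative assumptions; the abstract does not specify them.

```python
import torch
import torch.nn as nn

class DualStreamFusion(nn.Module):
    """Two modality-specific streams whose same-level (stage-wise) feature
    maps are fused. A minimal sketch under assumed stage/fusion choices."""
    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        def stage(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True))
        cins = (3,) + channels[:-1]
        self.ir_stages = nn.ModuleList(stage(i, o) for i, o in zip(cins, channels))
        self.vis_stages = nn.ModuleList(stage(i, o) for i, o in zip(cins, channels))

    def forward(self, x_ir, x_vis):
        fused = []                      # one fused map per level
        for s_ir, s_vis in zip(self.ir_stages, self.vis_stages):
            x_ir, x_vis = s_ir(x_ir), s_vis(x_vis)
            fused.append(x_ir + x_vis)  # fuse same-level features of both streams
        return fused

# Usage: paired infrared/visible inputs yield one fused map per stage.
model = DualStreamFusion()
maps = model(torch.randn(2, 3, 96, 32), torch.randn(2, 3, 96, 32))
```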
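The multi-granularity dividing of multi-layer features can be read as part-based pooling with a different number of horizontal stripes per layer, in the spirit of PCB/MGN-style models. The helper below is a hypothetical sketch of that reading; the function name and the choice of stripe counts are assumptions.

```python
import torch
import torch.nn.functional as F

def multi_granularity_pool(feat, num_parts):
    """Split a feature map (N, C, H, W) into `num_parts` horizontal stripes
    and average-pool each, yielding part descriptors of shape (N, P, C)."""
    pooled = F.adaptive_avg_pool2d(feat, (num_parts, 1))  # (N, C, P, 1)
    return pooled.squeeze(-1).transpose(1, 2)             # (N, P, C)

# Example: a coarser division for a shallow layer, a finer one for a deep layer.
shallow = torch.randn(8, 128, 48, 16)
deep = torch.randn(8, 256, 24, 8)
parts = [multi_granularity_pool(shallow, 2), multi_granularity_pool(deep, 4)]
```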
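One common way to learn per-branch loss weights through multi-task learning is homoscedastic-uncertainty weighting (Kendall et al., 2018). The sketch below shows that scheme as an assumed instantiation; the abstract does not name the exact weighting rule the authors use.

```python
import torch
import torch.nn as nn

class WeightedBranchLoss(nn.Module):
    """Learnable per-branch weights via uncertainty weighting: the total loss
    is sum_i exp(-s_i) * L_i + s_i, where s_i = log(sigma_i^2) is learned."""
    def __init__(self, num_branches):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_branches))

    def forward(self, branch_losses):
        total = 0.0
        for s, loss in zip(self.log_vars, branch_losses):
            total = total + torch.exp(-s) * loss + s
        return total

# Usage: combine the per-layer losses so each branch can be fully optimized.
criterion = WeightedBranchLoss(num_branches=3)
total_loss = criterion([torch.tensor(1.2), torch.tensor(0.8), torch.tensor(0.5)])
```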
Citation:
Cheng, D., Li, X., Qi, M., Liu, X., Chen, C., & Niu, D. (2020). Exploring Cross-Modality Commonalities via Dual-Stream Multi-Branch Network for Infrared-Visible Person Re-Identification. IEEE Access, 8, 12824–12834. https://doi.org/10.1109/ACCESS.2020.2966002