Person re-identification (ReID) is a widely used technique in criminal investigation and surveillance. Although current ReID methods achieve robust results on single domains, the research focus has shifted in recent years to the cross-domain setting, motivated by the domain bias between different datasets. Generative Adversarial Networks (GANs) are used to transfer image styles between datasets and thereby alleviate the cross-domain effect. However, existing GAN-based models ignore the complete expression and the occlusion of pedestrian characteristics, resulting in low feature-extraction accuracy. To address these issues, we introduce a cross-domain model based on feature fusion (FFGAN) that fuses global, local, and semantic features to extract more fine-grained pedestrian representations. Before extracting pedestrian features, we preprocess the feature maps with a feature erasure block to handle occlusion. As a result, FFGAN produces a more complete visual description of pedestrian characteristics, thereby improving its accuracy in identifying pedestrians. Experimental results show that FFGAN significantly outperforms several state-of-the-art cross-domain ReID algorithms.
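The two mechanisms named in the abstract, a feature erasure block applied to the feature maps and the fusion of global, local, and semantic features into one descriptor, can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: the erasure here zeroes one random rectangle, the local branch is plain horizontal striping with average pooling, and the semantic branch is a placeholder vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def erase_block(feat_map, frac=0.25):
    """Zero out a random rectangular region of a C x H x W feature map
    (a simple stand-in for the paper's feature erasure block)."""
    c, h, w = feat_map.shape
    eh, ew = max(1, int(h * frac)), max(1, int(w * frac))
    y = rng.integers(0, h - eh + 1)
    x = rng.integers(0, w - ew + 1)
    out = feat_map.copy()
    out[:, y:y + eh, x:x + ew] = 0.0
    return out

def fuse_features(global_f, local_fs, semantic_f):
    """Concatenate global, horizontal-stripe (local), and semantic
    embeddings into a single pedestrian descriptor."""
    return np.concatenate([global_f, *local_fs, semantic_f])

feat = rng.standard_normal((256, 24, 8))   # hypothetical backbone feature map
feat = erase_block(feat)                   # occlusion-robust preprocessing
global_f = feat.mean(axis=(1, 2))          # global average pooling
stripes = np.array_split(feat, 4, axis=1)  # 4 horizontal body parts
local_fs = [s.mean(axis=(1, 2)) for s in stripes]
semantic_f = rng.standard_normal(128)      # placeholder semantic branch
desc = fuse_features(global_f, local_fs, semantic_f)
print(desc.shape)  # (1408,) = 256 global + 4*256 local + 128 semantic
```

Erasing a region before pooling forces the pooled descriptors not to depend on any single patch, which is the intuition behind training against occlusion; the concatenated descriptor can then be compared across cameras with a cosine or Euclidean distance.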
Luo, X., Ouyang, Z., Du, N., Song, J., & Wei, Q. (2021). Cross-Domain Person Re-Identification Based on Feature Fusion. IEEE Access, 9, 98327–98336. https://doi.org/10.1109/ACCESS.2021.3091647