In real-world person re-identification (ReID) tasks, pedestrians are often occluded by other pedestrians or objects; moreover, changes in pose and observation viewpoint are also common in partial person ReID. To the best of our knowledge, few works address these two issues simultaneously. In this work, we propose a novel visibility-aware texture semantic alignment (TSA) approach for the partial person ReID task, in which occlusion and pose changes are handled jointly in an end-to-end unified framework. Specifically, we first employ a texture alignment scheme guided by the semantic visibility of a person image to address pose changes, which enhances the alignment and generalization capability of the model. Second, we design a human pose-based partial region alignment scheme to address occlusion, which makes the TSA method emphasize the body parts shared by the compared images. Finally, the two schemes are learned jointly. Extensive experimental results demonstrate that our proposed TSA method is effective and robust in simultaneously handling occlusion and pose changes, and that it outperforms state-of-the-art approaches by a large margin, improving rank-1 accuracy over the visibility-aware part model (VPM, published at CVPR 2019) by 5% and 6.4% on the Partial ReID and Partial-iLIDS datasets, respectively.
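To make the visibility-aware matching idea concrete, here is a minimal sketch of comparing two images only over body parts visible in both of them, which is the principle behind the shared-part emphasis described above. This is an illustration under assumed interfaces, not the authors' exact TSA formulation: the function name visibility_aware_distance, the part count P, and the cosine part distance are all illustrative choices.

```python
import torch
import torch.nn.functional as F

def visibility_aware_distance(feat_q: torch.Tensor,
                              feat_g: torch.Tensor,
                              vis_q: torch.Tensor,
                              vis_g: torch.Tensor) -> torch.Tensor:
    """Match two images only over body parts visible in BOTH of them.

    feat_q, feat_g: (P, D) part-level features for the query / gallery image.
    vis_q,  vis_g:  (P,)  visibility scores in [0, 1] per body part.
    Returns a scalar distance; occluded or out-of-view parts are down-weighted
    so they cannot corrupt the match. (Hypothetical sketch, not the paper's code.)
    """
    # A part contributes only in proportion to its visibility in both images.
    w = vis_q * vis_g                                              # (P,)

    # Cosine distance between corresponding part features.
    part_dist = 1.0 - (F.normalize(feat_q, dim=1) *
                       F.normalize(feat_g, dim=1)).sum(dim=1)      # (P,)

    # Visibility-weighted average over the shared parts.
    return (w * part_dist).sum() / w.sum().clamp(min=1e-6)

# Toy usage: 6 body parts, 256-D features; part 3 is occluded in the query.
P, D = 6, 256
fq, fg = torch.randn(P, D), torch.randn(P, D)
vq = torch.tensor([1.0, 1.0, 1.0, 0.0, 1.0, 1.0])  # part 3 not visible
vg = torch.ones(P)
print(visibility_aware_distance(fq, fg, vq, vg))
```

Weighting by joint visibility is the same shared-region principle used by VPM; TSA additionally aligns textures semantically before part-level matching, which this sketch does not attempt to reproduce.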
Citation
Gao, L., Zhang, H., Gao, Z., Guan, W., Cheng, Z., & Wang, M. (2020). Texture Semantically Aligned with Visibility-aware for Partial Person Re-identification. In MM '20: Proceedings of the 28th ACM International Conference on Multimedia (pp. 3771–3779). Association for Computing Machinery. https://doi.org/10.1145/3394171.3413833