Person re-identification (ReID) aims to match people across multiple non-overlapping video cameras deployed at different locations. To address this challenging problem, many metric learning approaches have been proposed, among which the triplet loss is one of the most effective. In this work, we study the margin between positive and negative pairs of triplets and show that a large margin is beneficial. In particular, we propose a novel multi-stage training strategy that learns an incremental triplet margin and improves the triplet loss effectively. Multiple levels of feature maps are exploited to make the learned features more discriminative. In addition, we introduce a global hard identity searching method that samples hard identities when constructing a training batch. Extensive experiments on Market-1501, CUHK03, and DukeMTMC-reID show that our approach yields a consistent performance boost and outperforms most existing state-of-the-art methods.
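To make the loss concrete, the sketch below shows a standard triplet loss together with a hypothetical stage-wise margin schedule in PyTorch. The helper names (`triplet_loss`, `incremental_margin`) and the specific margin values are illustrative assumptions, not the paper's exact formulation; the multi-level feature fusion and the global hard identity searching described in the abstract are not reproduced here.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin):
    """Standard triplet loss: the anchor-negative distance should exceed
    the anchor-positive distance by at least `margin`."""
    d_ap = F.pairwise_distance(anchor, positive)  # anchor-positive distances
    d_an = F.pairwise_distance(anchor, negative)  # anchor-negative distances
    return F.relu(d_ap - d_an + margin).mean()

def incremental_margin(stage, base_margin=0.3, step=0.2):
    """Hypothetical schedule: the margin grows with each training stage,
    so later stages demand a larger gap between positive and negative pairs.
    The values 0.3 and 0.2 are illustrative, not taken from the paper."""
    return base_margin + stage * step

# Quick check with random 128-D embeddings standing in for ReID features.
anchor, positive, negative = (torch.randn(8, 128) for _ in range(3))
for stage in range(3):
    loss = triplet_loss(anchor, positive, negative, incremental_margin(stage))
    print(f"stage {stage}: margin={incremental_margin(stage):.1f}, loss={loss.item():.3f}")
```

In an actual training run, the embeddings would come from the ReID feature extractor and triplets would be mined within batches whose identities are chosen by the hard identity searching strategy; the enlarged margin in later stages then forces a wider separation between positive and negative pairs.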
Citation: Zhang, Y., Zhong, Q., Ma, L., Xie, D., & Pu, S. (2019). Learning incremental triplet margin for person re-identification. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019) (pp. 9243–9250). AAAI Press. https://doi.org/10.1609/aaai.v33i01.33019243