Video-based person re-identification (re-id) is a challenging problem because of occlusion and changes in viewpoint, pedestrian posture, and illumination. Most existing video-based re-id methods simply concatenate the extracted appearance and space-time features, which fails to fully account for the discrepancies among different features. To address this problem, we propose a simple but effective method based on feature learning of valid regions and distance fusion, which combines three distances. The first is a local distance over valid regions computed from the Gaussian of Gaussian (GOG) feature: pedestrian images are divided into horizontal stripes, and stripes with smaller distances are retained as valid regions, while stripes with larger distances, attributed to occlusion and posture changes, are discarded as invalid. The other two distances are obtained by independent metric learning on the histogram of oriented gradients 3D (HOG3D) feature and the Local Maximal Occurrence (LOMO) feature. The three distances are summed to give the final distance between a pedestrian pair, from which the matching ranking of the gallery is obtained. Extensive experiments on the iLIDS-VID and PRID-2011 datasets demonstrate the effectiveness of our method.
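The valid-region selection and distance fusion described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `keep_ratio` parameter, the function names, and the assumption that per-stripe GOG distances and the HOG3D/LOMO metric distances are already computed are all our own simplifications.

```python
import numpy as np

def valid_region_distance(local_dists, keep_ratio=0.75):
    """Sum only the smallest per-stripe distances (the 'valid' regions).

    local_dists: distances between corresponding horizontal stripes of a
    probe/gallery pair (e.g. computed from GOG features). Stripes with
    larger distances are treated as invalid (occlusion, posture change)
    and dropped; keep_ratio is an assumed hyperparameter.
    """
    d = np.sort(np.asarray(local_dists, dtype=float))
    k = max(1, int(round(keep_ratio * d.size)))  # number of stripes kept
    return float(d[:k].sum())

def fused_distance(gog_local_dists, d_hog3d, d_lomo, keep_ratio=0.75):
    """Final distance = valid-region GOG distance + HOG3D + LOMO distances,
    where d_hog3d and d_lomo come from independently learned metrics."""
    return valid_region_distance(gog_local_dists, keep_ratio) + d_hog3d + d_lomo
```

Gallery candidates would then be ranked by `fused_distance` in ascending order to produce the matching ranking.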
Yang, D., Qi, M., Wu, J., & Jiang, J. (2019). Video-based Person Re-identification Based on Feature Learning of Valid Regions and Distance Fusion. In Journal of Physics: Conference Series (Vol. 1229). Institute of Physics Publishing. https://doi.org/10.1088/1742-6596/1229/1/012009