Robust visual tracking based on improved perceptual hashing for robot vision


Abstract

In this paper, perceptual hash codes are adopted as appearance models of objects for visual tracking. Building on three existing basic perceptual hashing techniques, we propose Laplace-based hash (LHash) and Laplace-based difference hash (LDHash) to track objects efficiently and robustly in challenging video sequences. Qualitative and quantitative comparisons with representative tracking methods such as mean-shift and compressive tracking show that perceptual hashing-based tracking outperforms these baselines, and that the two newly proposed algorithms perform best under various challenging conditions in terms of efficiency, accuracy and robustness. In particular, they can overcome severe challenges such as illumination changes, motion blur and pose variation.
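The abstract does not give implementation details, but the basic difference hash (dHash) it builds on is well known: downscale a grayscale patch, compare horizontally adjacent pixels to obtain a binary code, and match candidate windows to the template by Hamming distance. The sketch below illustrates that idea under stated assumptions; the block-averaging resize, the function names, and the hash size are illustrative choices, not the paper's actual implementation (the Laplace-based variants LHash/LDHash presumably apply a Laplacian-style operator first, which is omitted here).

```python
import numpy as np

def dhash(gray, hash_size=8):
    """Basic difference hash of a grayscale patch (illustrative sketch).

    Downscales the patch to (hash_size, hash_size + 1) by crude block
    averaging, then compares each pixel with its right neighbour to
    produce a hash_size * hash_size binary code.
    """
    h, w = gray.shape
    rows = np.array_split(np.arange(h), hash_size)
    cols = np.array_split(np.arange(w), hash_size + 1)
    small = np.array([[gray[np.ix_(r, c)].mean() for c in cols]
                      for r in rows])
    # True where intensity increases left-to-right
    return (small[:, 1:] > small[:, :-1]).flatten()

def hamming(code_a, code_b):
    """Number of differing bits between two binary hash codes."""
    return int(np.count_nonzero(code_a != code_b))
```

In a tracking loop, one would hash the target template once, then hash candidate windows around the previous location in each new frame and keep the window with the smallest Hamming distance; because the code is binary, this matching step is very cheap, which is consistent with the efficiency claims in the abstract.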

Citation (APA)

Fei, M., Li, J., Shao, L., Ju, Z., & Ouyang, G. (2015). Robust visual tracking based on improved perceptual hashing for robot vision. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9246, pp. 331–340). Springer Verlag. https://doi.org/10.1007/978-3-319-22873-0_29
