Robust Visual Tracking via Occlusion Detection Based on Depth-Layer Information


Abstract

In this paper, we propose a novel occlusion detection algorithm based on depth-layer information for robust visual tracking. The scene is classified into three depth layers: the near layer, the target layer, and the far layer. We observe that when occlusion happens, some background patches from the near layer move into the target region and thereby occlude the target. Based on this characteristic of occlusion, we propose an algorithm that exploits both temporal and spatial context information to discriminate occlusion from target appearance variation. Within a particle filter framework, our algorithm divides the background region around the target into multiple patches and tracks each of them. The background patch that occludes the target is identified collaboratively from the tracking results of both the background and target trackers. The degree of occlusion is then evaluated with a target visibility function; if occlusion is detected, the target template stops updating. Comprehensive experiments on OTB-2013 and VOT-2015 show that our tracker achieves performance comparable to other state-of-the-art trackers.

Citation (APA)

Niu, X., Cui, Z., Geng, S., Yang, J., & Qiao, Y. (2017). Robust Visual Tracking via Occlusion Detection Based on Depth-Layer Information. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10636 LNCS, pp. 44–53). Springer Verlag. https://doi.org/10.1007/978-3-319-70090-8_5
