In this paper, we propose a novel occlusion detection algorithm based on depth-layer information for robust visual tracking. The scene is classified into three depth layers: the near layer, the target layer, and the far layer. We observe that when occlusion occurs, background patches from the near layer move into the target region and occlude the target. Based on this characteristic of occlusion, we propose an algorithm that exploits both temporal and spatial context information to discriminate occlusion from target appearance variation. Within a particle-filter framework, our algorithm divides the background region around the target into multiple patches and tracks each of them. The background patch that occludes the target is identified collaboratively from the results of the background and target trackers, and the occlusion is then evaluated with a target visibility function. If occlusion is detected, the target template stops updating. Comprehensive experiments on OTB-2013 and VOT-2015 show that our tracker achieves performance comparable to state-of-the-art trackers.
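The occlusion-gated template update described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the overlap-based visibility function, the box representation, and the 0.8 threshold are all assumptions introduced here for clarity.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def target_visibility(target_box, background_boxes):
    """Visibility drops as tracked near-layer background patches
    overlap the target region (a simple assumed proxy)."""
    occluded = max((iou(target_box, b) for b in background_boxes), default=0.0)
    return 1.0 - occluded

def should_update_template(target_box, background_boxes, threshold=0.8):
    """Freeze the target template when visibility falls below a
    threshold, i.e. when occlusion is detected."""
    return target_visibility(target_box, background_boxes) >= threshold
```

In this sketch, a background patch that the background trackers follow into the target region drives the visibility down, which in turn suspends the template update, mirroring the behavior the abstract describes.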
Niu, X., Cui, Z., Geng, S., Yang, J., & Qiao, Y. (2017). Robust Visual Tracking via Occlusion Detection Based on Depth-Layer Information. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10636 LNCS, pp. 44–53). Springer Verlag. https://doi.org/10.1007/978-3-319-70090-8_5