Context Adaptive Visual Tracker in Surveillance Networks

Abstract

CNN-based visual trackers have been successfully applied to surveillance networks. Some trackers apply a sliding-window method to generate candidate samples, which serve as the input to the network. However, candidate samples containing too many background regions are sometimes mistakenly used for target tracking, which leads to a drift problem. To mitigate this problem, we propose a novel Context Adaptive Visual Tracker (CAVT), which discards patches containing too many background regions and constructs a robust appearance model of the tracking target. The proposed method first formulates a weighted similarity function to construct a pure target region. The pure target region and the area surrounding the bounding box are used as a target prior and a background prior, respectively. The method then exploits both priors to distinguish target regions from background regions within the bounding box. Experiments on the challenging OTB benchmark demonstrate that the proposed CAVT algorithm performs favorably against several state-of-the-art methods.
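The abstract does not give the exact form of the weighted similarity function or the priors, so the following is only a minimal illustrative sketch of the general idea: score each sliding-window candidate against a target prior and a background prior, and discard candidates that look more like background. The histogram features, the Bhattacharyya similarity, and the center-based weighting below are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the paper's method): filter sliding-window
# candidate patches by comparing them to a target prior and a background
# prior. Histogram features, Bhattacharyya similarity, and the
# center-distance weighting are hypothetical choices.
import numpy as np

def color_histogram(patch, bins=16):
    """Normalized per-channel color histogram of an HxWx3 uint8 patch."""
    hists = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
             for c in range(patch.shape[-1])]
    h = np.concatenate(hists).astype(np.float64)
    return h / (h.sum() + 1e-12)

def bhattacharyya(p, q):
    """Similarity in [0, 1] between two normalized histograms."""
    return float(np.sum(np.sqrt(p * q)))

def center_weight(patch_box, target_box):
    """Hypothetical weight: candidates whose centers lie far from the
    previous target center get smaller weights."""
    px, py = patch_box[0] + patch_box[2] / 2.0, patch_box[1] + patch_box[3] / 2.0
    tx, ty = target_box[0] + target_box[2] / 2.0, target_box[1] + target_box[3] / 2.0
    dist = np.hypot(px - tx, py - ty)
    scale = np.hypot(target_box[2], target_box[3]) + 1e-12
    return np.exp(-dist / scale)

def keep_candidate(patch, patch_box, target_hist, background_hist,
                   target_box, margin=0.05):
    """Keep a candidate only if its weighted similarity to the target prior
    exceeds its similarity to the background prior by a margin."""
    h = color_histogram(patch)
    target_score = center_weight(patch_box, target_box) * bhattacharyya(h, target_hist)
    background_score = bhattacharyya(h, background_hist)
    return target_score > background_score + margin
```

In this sketch, `target_hist` would be built from the pure target region and `background_hist` from the area surrounding the bounding box, mirroring the target and background priors described in the abstract; the actual CAVT formulation may use different features and weighting.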

Citation (APA)

Feng, W., Li, M., Zhou, Y., Li, Z., & Li, C. (2019). Context Adaptive Visual Tracker in Surveillance Networks. In Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST (Vol. 286, pp. 374–382). Springer Verlag. https://doi.org/10.1007/978-3-030-22968-9_33
