Distractor-Aware Visual Tracking by Online Siamese Network


Abstract

Most trackers based on Siamese networks follow the paradigm of offline training and online tracking: online tracking is conducted on deep features extracted by a network pre-trained offline on a large amount of data. However, these features are general representations of similar objects, so their discriminative ability is insufficient to distinguish the current target, and in particular distractors, from the background. To tackle this problem, we propose to update the features extracted by the Siamese network online, so that they adapt to target variations while tracking is on the fly. Specifically, we extract common features from the shallow convolutional layers trained offline and feed them into the deep convolutional layers, which learn target-specific features online. In addition, an integrated updating strategy is proposed to accelerate network convergence. This enhances the discriminative ability of the learned features to distinguish the current target from the background and distractors. We conducted extensive experiments on the OTB2015 and VOT2016 benchmarks. The results demonstrate that our tracker effectively improves the baseline algorithm and performs favorably against most state-of-the-art trackers in terms of accuracy and robustness.
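A minimal sketch of the core idea described in the abstract, assuming a PyTorch implementation: the shallow convolutional layers trained offline are frozen and provide common features, while the deep convolutional layers are fine-tuned online so that the cross-correlation response map separates the target from background and distractors. The layer configuration, loss, optimizer settings, and the helper online_update are illustrative assumptions, not the authors' actual architecture or their integrated updating strategy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OnlineSiameseFeatures(nn.Module):
    """Hypothetical feature extractor: frozen shallow layers + online-updated deep layers."""

    def __init__(self):
        super().__init__()
        # Shallow layers: trained offline on large data, shared and kept fixed online.
        self.shallow = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 128, kernel_size=5), nn.ReLU(),
        )
        # Deep layers: fine-tuned online to learn features specific to the current target.
        self.deep = nn.Sequential(
            nn.Conv2d(128, 256, kernel_size=3), nn.ReLU(),
            nn.Conv2d(256, 256, kernel_size=3),
        )

    def forward(self, x):
        common = self.shallow(x)   # general features from offline training
        return self.deep(common)   # target-specific features, adapted online


model = OnlineSiameseFeatures()

# Freeze the offline-trained shallow layers; only the deep layers receive online updates.
for p in model.shallow.parameters():
    p.requires_grad = False

optimizer = torch.optim.SGD(model.deep.parameters(), lr=1e-3, momentum=0.9)


def online_update(exemplar, search, label, steps=1):
    """One online adaptation step: raise the response on the target and
    suppress responses on background and distractors in the search region."""
    for _ in range(steps):
        z = model(exemplar)                 # template (exemplar) features
        x = model(search)                   # search-region features
        response = F.conv2d(x, z)           # cross-correlation response map
        loss = F.binary_cross_entropy_with_logits(response, label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


# Illustrative usage: with this configuration, a 127x127 exemplar and a
# 255x255 search region produce a 33x33 response map.
exemplar = torch.randn(1, 3, 127, 127)
search = torch.randn(1, 3, 255, 255)
label = torch.zeros(1, 1, 33, 33)
label[:, :, 16, 16] = 1.0                   # positive at the (assumed) target location
online_update(exemplar, search, label)
```

In this sketch only model.deep.parameters() are passed to the optimizer, which mirrors the division of labor described in the abstract: common features from the fixed shallow layers, target-specific features from the online-updated deep layers.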

Citation (APA)

Zha, Y., Wu, M., Qiu, Z., Dong, S., Yang, F., & Zhang, P. (2019). Distractor-Aware Visual Tracking by Online Siamese Network. IEEE Access, 7, 89777–89788. https://doi.org/10.1109/ACCESS.2019.2927211
