Adaptive correlation model for visual tracking using keypoints matching and deep convolutional feature

Abstract

Although correlation filter (CF)-based visual tracking algorithms have achieved appealing results, several problems remain to be solved. When the target object undergoes long-term occlusion or scale variation, the correlation model used in existing CF-based algorithms inevitably learns non-target or partial-target information. To avoid model contamination and enhance the adaptability of model updating, we introduce a keypoint-matching strategy and adjust the model learning rate dynamically according to the matching score. Moreover, the proposed approach extracts convolutional features from a deep convolutional neural network (DCNN) to accurately estimate the position and scale of the target. Experimental results demonstrate that the proposed tracker achieves satisfactory performance across a wide range of challenging tracking scenarios.
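
For readers who want a concrete picture of the adaptive update step described above, the following is a minimal Python sketch: a keypoint-matching score between the initial target template and the current target patch controls the learning rate used to interpolate the correlation model. The ORB keypoints, the descriptor-distance threshold, and the linear score-to-rate mapping are illustrative assumptions, not the authors' exact formulation.

    # Minimal sketch (not the paper's implementation): adapt the CF model's
    # learning rate from a keypoint-matching score between the initial target
    # template and the current estimated target patch.
    import cv2
    import numpy as np

    BASE_LR = 0.02   # assumed nominal learning rate for model interpolation
    MIN_LR = 0.0     # freeze the model when matching fails (e.g., occlusion)

    orb = cv2.ORB_create(nfeatures=500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def matching_score(template_gray, patch_gray):
        """Fraction of template keypoints with a good match in the current patch."""
        kp1, des1 = orb.detectAndCompute(template_gray, None)
        kp2, des2 = orb.detectAndCompute(patch_gray, None)
        if des1 is None or des2 is None or len(kp1) == 0:
            return 0.0
        matches = matcher.match(des1, des2)
        good = [m for m in matches if m.distance < 50]  # assumed distance threshold
        return len(good) / float(len(kp1))

    def adaptive_learning_rate(score, score_thresh=0.3):
        """Map the matching score to a learning rate: low score -> slow or frozen update."""
        if score >= score_thresh:
            return BASE_LR
        return MIN_LR + (BASE_LR - MIN_LR) * (score / score_thresh)

    def update_model(model_old, model_new, lr):
        """Standard CF-style linear interpolation of the correlation model."""
        return (1.0 - lr) * model_old + lr * model_new

Under this scheme, a low matching score (for example, during occlusion) drives the learning rate toward zero, so the model is effectively frozen rather than absorbing background content into the correlation filter.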

Citation (APA)
Li, Y., Xu, T., Deng, H., Shi, G., & Guo, J. (2018). Adaptive correlation model for visual tracking using keypoints matching and deep convolutional feature. Sensors (Switzerland), 18(2). https://doi.org/10.3390/s18020653
