Residual attention convolutional network for online visual tracking

Abstract

Discriminative correlation filters (DCFs) have received much attention in visual tracking due to their high performance, but they suffer from unwanted boundary effects. Convolutional regression tracking reformulates the DCF as a one-layer convolutional network and avoids these boundary effects. However, the performance of such single-convolutional-network algorithms is drastically limited by overfitting caused by data imbalance. In this paper, we add a residual attention module to the one-layer convolutional network to counteract the loss of discriminative ability caused by overfitting. A bottom-up, top-down fully convolutional structure is designed in the residual attention module to form samples with a larger receptive field. After that, two types of activation function are applied to capture spatial attention and temporal attention. By combining the two types of attention, the residual attention module can highlight the object and suppress the background response. We perform extensive experiments on two widely used datasets, OTB-2013 and OTB-2015, and the results show that the proposed algorithm achieves favorable performance compared with state-of-the-art trackers.
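The abstract does not spell out how the attention mask is combined with the features. A common residual-attention formulation computes the output as (1 + M) · F, where M is a soft mask in (0, 1): the residual term preserves the original features even where the mask is near zero, so attention can only emphasize the object, never erase information. Below is a minimal NumPy sketch of that combination (the function name, the sigmoid mask, and the toy feature values are illustrative assumptions, not the paper's exact design):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_attention(features, mask_logits):
    """Hypothetical residual-attention combination: (1 + M) * F.

    The soft mask M in (0, 1) highlights attended regions, while the
    residual '1 +' term keeps the original features intact where the
    mask is near zero.
    """
    mask = sigmoid(mask_logits)      # spatial attention weights in (0, 1)
    return (1.0 + mask) * features   # residual combination

# Toy example: a 4x4 feature map with mask logits that strongly
# emphasize the center region (illustrative values only).
features = np.ones((4, 4))
mask_logits = np.full((4, 4), -10.0)
mask_logits[1:3, 1:3] = 10.0         # high attention at the center
out = residual_attention(features, mask_logits)
# Center responses are roughly doubled; background stays near 1.
```

In a tracking network, the mask logits would come from the bottom-up/top-down branch described in the abstract, and the weighted features would feed the regression layer.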

Citation (APA)

Gao, L., Li, Y., & Ning, J. (2019). Residual attention convolutional network for online visual tracking. IEEE Access, 7, 94097–94105. https://doi.org/10.1109/ACCESS.2019.2927791
