DCCO: Towards deformable continuous convolution operators for visual tracking


Abstract

Discriminative Correlation Filter (DCF) based methods have shown competitive performance on tracking benchmarks in recent years. Generally, DCF-based trackers learn a rigid appearance model of the target. However, this reliance on a single rigid appearance model is insufficient in situations where the target undergoes non-rigid transformations. In this paper, we propose a unified formulation for learning a deformable convolution filter. In our framework, the deformable filter is represented as a linear combination of sub-filters. Both the sub-filter coefficients and their relative locations are inferred jointly in our formulation. Experiments are performed on three challenging tracking benchmarks: OTB-2015, TempleColor and VOT2016. Our approach improves over the baseline method, achieving performance comparable to the state of the art.
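The core idea in the abstract, a deformable filter expressed as a linear combination of sub-filters evaluated at learned relative locations, can be illustrated with a minimal NumPy sketch. This is a conceptual illustration only, not the authors' implementation: the function name, the fixed per-filter offsets, and the FFT-based correlation are assumptions for the sake of a runnable example (the paper infers coefficients and locations jointly by optimization, which is omitted here).

```python
import numpy as np

def deformable_response(feature_map, sub_filters, offsets, coeffs):
    """Evaluate a deformable filter as a weighted sum of shifted
    sub-filter correlation responses (conceptual sketch, not the
    authors' code; offsets/coeffs are assumed given, whereas the
    paper learns them jointly)."""
    H, W = feature_map.shape
    F = np.fft.fft2(feature_map)
    total = np.zeros((H, W))
    for f, (dy, dx), c in zip(sub_filters, offsets, coeffs):
        # Correlate the feature map with this sub-filter in the
        # Fourier domain (conjugate of the filter spectrum).
        resp = np.real(np.fft.ifft2(F * np.conj(np.fft.fft2(f, s=(H, W)))))
        # Place the sub-filter response at its relative location
        # and accumulate with its learned coefficient.
        total += c * np.roll(resp, shift=(dy, dx), axis=(0, 1))
    return total
```

In this sketch the deformability comes from the per-sub-filter shifts: moving the offsets lets the composite filter adapt its spatial layout to a non-rigid target without changing the sub-filters themselves.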

APA

Johnander, J., Danelljan, M., Khan, F. S., & Felsberg, M. (2017). DCCO: Towards deformable continuous convolution operators for visual tracking. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10424 LNCS, pp. 55–67). Springer Verlag. https://doi.org/10.1007/978-3-319-64689-3_5
