Real-time ‘Actor-Critic’ tracking

38 citations · 121 Mendeley readers

This article is free to access.

Abstract

In this work, we propose a novel tracking algorithm with real-time performance based on the ‘Actor-Critic’ framework. This framework consists of two major components: ‘Actor’ and ‘Critic’. The ‘Actor’ model infers the optimal choice in a continuous action space, which directly moves the bounding box to the object’s location in the current frame. For offline training, the ‘Critic’ model is introduced to form an ‘Actor-Critic’ framework with reinforcement learning; it outputs a Q-value to guide the learning of both the ‘Actor’ and ‘Critic’ deep networks. We then modify the original deep deterministic policy gradient (DDPG) algorithm to train our ‘Actor-Critic’ model effectively for the tracking task. For online tracking, the ‘Actor’ model provides a dynamic search strategy to locate the tracked object efficiently, while the ‘Critic’ model acts as a verification module to make the tracker more robust. To the best of our knowledge, this work is the first attempt to exploit continuous actions and the ‘Actor-Critic’ framework for visual tracking. Extensive experimental results on popular benchmarks demonstrate that the proposed tracker performs favorably against many state-of-the-art methods while running in real time.
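As an illustration of the continuous action space described above, a minimal sketch of how an ‘Actor’ output could move the bounding box is given below. The (dx, dy, ds) parametrisation (relative offsets plus a log-scale change) is an assumption made for illustration; the paper’s exact action definition may differ.

```python
import math

def apply_action(bbox, action):
    """Move a bounding box by a continuous action.

    bbox   -- (cx, cy, w, h): centre coordinates, width, height
    action -- (dx, dy, ds): hypothetical offsets relative to box size,
              plus a log-scale change (assumed parametrisation)
    """
    cx, cy, w, h = bbox
    dx, dy, ds = action
    scale = math.exp(ds)            # multiplicative scale change
    return (cx + dx * w,            # shift centre by a fraction of width
            cy + dy * h,            # shift centre by a fraction of height
            w * scale,
            h * scale)

# Example: a small rightward shift with no scale change
new_box = apply_action((100.0, 100.0, 50.0, 40.0), (0.1, 0.0, 0.0))
```

Under this parametrisation, a single actor inference per frame yields the new box directly, which is what enables the dynamic (rather than exhaustive) search strategy.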

Citation (APA)

Chen, B., Wang, D., Li, P., Wang, S., & Lu, H. (2018). Real-time ‘Actor-Critic’ tracking. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11211 LNCS, pp. 328–345). Springer Verlag. https://doi.org/10.1007/978-3-030-01234-2_20
