Decision controller for object tracking with deep reinforcement learning


Abstract

Many decisions in both single object tracking (SOT) and multiple object tracking (MOT) are usually made heuristically. Existing methods tackle decision-making problems for specific tracking tasks without a unified framework. In this paper, we propose a decision controller (DC) that is generally applicable to both SOT and MOT tasks. The controller learns an optimal decision-making policy with a deep reinforcement learning algorithm that maximizes long-term tracking performance without supervision. To demonstrate the generalization ability of DC, we apply it to the challenging ensemble problem in SOT and the tracker-detector switching problem in MOT. In the tracker ensemble experiment, our ensemble-based tracker achieves leading performance in the VOT2016 challenge, and its light version also obtains a state-of-the-art result at 50 FPS. In the MOT experiment, the tracker-detector switching controller enables real-time online tracking with competitive performance and a 10× speedup.
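The tracker-detector switching idea in the abstract can be illustrated as a reinforcement-learning decision problem: at each frame a controller picks either the fast tracker or the slow-but-robust detector so as to maximize long-term reward. The following is a minimal toy sketch, not the paper's implementation — the confidence states, rewards, and deterministic transitions are all illustrative assumptions, and tabular Q-learning stands in for the paper's deep RL algorithm.

```python
import random

# Toy sketch (illustrative assumptions, not the paper's method): a tabular
# Q-learning "decision controller" that, per frame, chooses between a fast
# tracker and a slow-but-robust detector.

LOW, HIGH = 0, 1            # tracker-confidence buckets (the state)
TRACKER, DETECTOR = 0, 1    # the two available actions

def step(state, action):
    """Deterministic toy dynamics: the detector always restores confidence;
    the tracker is only reliable while confidence is high."""
    if action == DETECTOR:
        return HIGH, 0.5        # accurate but slow -> smaller reward
    if state == HIGH:
        return HIGH, 1.0        # fast and accurate
    return LOW, -1.0            # tracking a lost target drifts

def train(episodes=500, steps=20, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]            # q[state][action]
    for _ in range(episodes):
        s = rng.choice([LOW, HIGH])
        for _ in range(steps):
            # epsilon-greedy exploration
            if rng.random() < eps:
                a = rng.choice([TRACKER, DETECTOR])
            else:
                a = max((TRACKER, DETECTOR), key=lambda x: q[s][x])
            s2, r = step(s, a)
            # standard Q-learning update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
policy = [max((TRACKER, DETECTOR), key=lambda a: q[s][a]) for s in (LOW, HIGH)]
# Learned switching rule: run the detector when confidence is low,
# stay with the cheap tracker while confidence is high.
```

Because the detector is slower, its per-step reward is lower, yet the learned policy still calls it in the low-confidence state — the long-term return of recovering the target outweighs the one-step cost, which is exactly the kind of trade-off the abstract's controller is trained to make.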

Citation (APA)

Zhong, Z., Yang, Z., Feng, W., Wu, W., Hu, Y., & Liu, C. L. (2019). Decision controller for object tracking with deep reinforcement learning. IEEE Access, 7, 28069–28079. https://doi.org/10.1109/ACCESS.2019.2900476
