Fast appearance variations and distraction by similar objects are two of the most challenging problems in visual object tracking. Unlike many existing trackers that model only the target, in this work we consider the transient variations of the whole scene. The key insight is that the object correspondences and spatial layout of the whole scene are consistent across consecutive frames (i.e., global structure consistency), which helps to disambiguate the target from distractors. Moreover, modeling transient variations enables the tracker to localize the target under fast variations. Specifically, we propose an effective and efficient short-term model that learns to exploit global structure consistency over a short time span and can thus handle fast variations and distractors. Since short-term modeling falls short in handling occlusion and out-of-view cases, we adopt the long-short-term paradigm and use a long-term model that corrects the short-term model when it drifts away from the target or when the target is absent. These two components are carefully combined to balance stability and plasticity during tracking. We empirically verify that the proposed tracker can tackle these two challenging scenarios and validate it on large-scale benchmarks. Remarkably, our tracker improves state-of-the-art performance on VOT2018 from 0.440 to 0.460, GOT-10k from 0.611 to 0.640, and NFS from 0.619 to 0.629.
Li, B., Zhang, C., Hong, Z., Tang, X., Liu, J., Han, J., … Liu, W. (2020). Learning Global Structure Consistency for Robust Object Tracking. In MM 2020 - Proceedings of the 28th ACM International Conference on Multimedia (pp. 229–237). Association for Computing Machinery, Inc. https://doi.org/10.1145/3394171.3413644