Abstract
In recent years, Siamese network (SiamN) based visual tracking has achieved great success in terms of both accuracy and efficiency. Nevertheless, most SiamN-based trackers employ shallow networks such as AlexNet and use only top-layer features as the target representation; these features are less discriminative and often lead to degraded tracking performance under large deformation and similar distractors. A straightforward remedy is to replace the SiamN backbone with a deeper ResNet. However, this alone does not boost performance much, because the high-level feature maps have low resolution and lose useful spatial details. To address this issue, the authors propose a lightweight yet effective feature agglomeration module (FAM) that adaptively fuses low-level and high-level features for robust tracking. Specifically, they first develop a generalised non-local attention module to enhance the discriminative capability of high-level semantic features. They then design an inception-like module to enhance the representative power of low-level features, which carry more spatial detail. Both types of features are adaptively fused in the FAM so that their complementary characteristics are exploited. Extensive evaluations on the OTB-2015 and VOT2017 benchmarks demonstrate that the proposed tracker consistently achieves favourable performance against several state-of-the-art trackers while running at 50 fps.
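The abstract describes two ingredients: a non-local attention module that relates every spatial position of the high-level feature map to every other, and an adaptive fusion step that combines low-level (spatially detailed) and high-level (semantic) features. The sketch below is a minimal NumPy illustration of these two ideas, not the paper's actual implementation: the identity embeddings in the attention and the scalar fusion weight `alpha` are simplifying assumptions (the real modules learn projection weights and fusion coefficients).

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local_attention(x):
    # x: (N, C) feature map flattened to N = H*W positions, C channels.
    # Identity embeddings are used here for simplicity; the paper's
    # generalised non-local module learns the embedding projections.
    attn = softmax(x @ x.T / np.sqrt(x.shape[1]))  # (N, N) pairwise affinities
    return x + attn @ x  # residual aggregation of globally related features

def adaptive_fuse(low, high, alpha=0.5):
    # Hypothetical scalar-weighted fusion of low- and high-level features;
    # the paper's FAM learns the fusion adaptively rather than using a
    # fixed alpha.
    return alpha * low + (1.0 - alpha) * high

# Toy usage: 16 spatial positions, 8 channels per level.
rng = np.random.default_rng(0)
low_feat = rng.standard_normal((16, 8))
high_feat = non_local_attention(rng.standard_normal((16, 8)))
fused = adaptive_fuse(low_feat, high_feat)
```

The residual form `x + attn @ x` mirrors the common non-local block design, letting the module fall back to the original features when global context adds nothing.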
Citation
Yang, K., Song, H., Zhang, K., & Fan, J. (2019). Deeper Siamese network with multi-level feature fusion for real-time visual tracking. Electronics Letters, 55(13), 742–745. https://doi.org/10.1049/el.2019.1041