Lightweight and Deep Appearance Embedding for Multiple Object Tracking


Abstract

The main challenge of Multiple Object Tracking (MOT) is the great uncertainty in data association when matching predicted detections against tracked trajectories. MOT is also computationally complex and time-consuming, which severely limits its use when hardware resources are scarce or real-time performance is required. We therefore propose a Lightweight Deep Appearance Embedding (LDAE) to assist trajectory association. First, in addition to motion information, we introduce more discriminative appearance features into the affinity measure used for data association, which helps distinguish visually similar targets. Second, following the idea of feature mapping, we design a lightweight deep appearance embedding module that extracts appearance features with little computation. Finally, we propose a simulated occlusion strategy for training the LDAE, which improves its ability to recognise different targets in dense scenes. The LDAE dramatically reduces the computational cost and improves the accuracy of data association. Extensive experiments on the MOT datasets (MOT16, MOT17 and MOT20) show that LDAE outperforms several state-of-the-art trackers in tracking accuracy and robustness to occlusion. Furthermore, we apply LDAE to pedestrian tracking on escalators, where it achieves fast and stable tracking.
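The data-association step described above can be illustrated with a small sketch. The snippet below is a hedged illustration, not the paper's exact formulation: it fuses an IoU-based motion cost with a cosine appearance cost computed from L2-normalised embeddings, then solves the assignment with the Hungarian algorithm. The fusion weight lam and gating threshold max_cost are hypothetical parameters introduced here for illustration.

    # Minimal sketch of motion + appearance association (assumptions noted above).
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def iou(a, b):
        """IoU of two boxes given as (x1, y1, x2, y2)."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def associate(track_boxes, track_embs, det_boxes, det_embs, lam=0.5, max_cost=0.8):
        """Match tracks to detections with a weighted sum of motion and appearance costs."""
        cost = np.zeros((len(track_boxes), len(det_boxes)))
        for i in range(len(track_boxes)):
            for j in range(len(det_boxes)):
                motion_cost = 1.0 - iou(track_boxes[i], det_boxes[j])
                # Cosine distance between L2-normalised appearance embeddings.
                app_cost = 1.0 - float(np.dot(track_embs[i], det_embs[j]))
                cost[i, j] = lam * motion_cost + (1.0 - lam) * app_cost
        rows, cols = linear_sum_assignment(cost)
        # Keep only pairs whose fused cost passes the gating threshold.
        return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < max_cost]

Given per-frame detections and embeddings from the lightweight module, associate returns matched (track, detection) index pairs; unmatched detections can then spawn new tracks and unmatched tracks be marked as lost.

The simulated occlusion strategy used for training can likewise be sketched as a simple crop augmentation. The version below erases a random rectangle of a person crop with noise and is only an approximation of the idea (the paper's strategy may differ, for example by pasting other pedestrians); it assumes HWC uint8 crops, and the erase fractions are illustrative.

    # Minimal sketch of a simulated-occlusion augmentation for embedding training.
    import numpy as np

    def simulate_occlusion(crop, min_frac=0.1, max_frac=0.4, rng=None):
        rng = rng or np.random.default_rng()
        h, w = crop.shape[:2]
        occ_h = int(h * rng.uniform(min_frac, max_frac))
        occ_w = int(w * rng.uniform(min_frac, max_frac))
        top = rng.integers(0, h - occ_h + 1)
        left = rng.integers(0, w - occ_w + 1)
        out = crop.copy()
        # Fill the occluded region with random noise to mimic a foreground occluder.
        out[top:top + occ_h, left:left + occ_w] = rng.integers(
            0, 256, (occ_h, occ_w, crop.shape[2]), dtype=crop.dtype)
        return out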

Citation (APA)
Ye, L., Li, W., Zheng, L., & Zeng, Y. (2022). Lightweight and Deep Appearance Embedding for Multiple Object Tracking. IET Computer Vision, 16(6), 489–503. https://doi.org/10.1049/cvi2.12106
