What do I see? Modeling human visual perception for multi-person tracking

2 citations · 37 Mendeley readers

This article is free to access.

Abstract

This paper presents a novel approach for multi-person tracking that uses a motion model motivated by the human visual system. The model predicts human motion from perceived information. An attention map is designed to mimic human reasoning by integrating both spatial and temporal information. The spatial component captures how human attention is allocated across areas of a scene and is represented by a retinal mapping based on the log-polar transformation, while the temporal component captures how attention is allocated to subjects moving at different velocities and is modeled as a static-dynamic attention map. Combining the static-dynamic attention map with the retinal mapping, the attention-driven motion of the tracked target is estimated using a center-surround search mechanism. This perception-based motion model is integrated into a data-association tracking framework with appearance and motion features. The proposed algorithm tracks a large number of subjects in complex scenes, and evaluation on public datasets shows promising improvements over state-of-the-art methods. © 2014 Springer International Publishing.
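The abstract's retinal mapping is based on the log-polar transformation, in which resolution is highest near the fixation point (the "fovea") and falls off logarithmically toward the periphery. A minimal sketch of such a mapping is shown below; this is an illustration of the general technique, not the authors' implementation, and the function name and details are assumed:

```python
import numpy as np

def log_polar_map(points, center):
    """Map Cartesian points to log-polar (retinal) coordinates
    around a fixation center: rho = log(1 + r), theta = atan2(dy, dx).

    Distances near the center are preserved almost linearly, while
    distances in the periphery are compressed logarithmically,
    mimicking the falloff of visual acuity in the human retina.
    """
    pts = np.asarray(points, dtype=float)
    d = pts - np.asarray(center, dtype=float)
    r = np.hypot(d[:, 0], d[:, 1])      # radial distance from fixation
    rho = np.log1p(r)                   # log1p keeps the map finite at r = 0
    theta = np.arctan2(d[:, 1], d[:, 0])
    return np.stack([rho, theta], axis=1)
```

For example, two targets at distances 10 and 100 pixels from the fixation point along the same ray keep the same angle, but their tenfold separation in radius is compressed to a difference of about 2.2 in the log-polar radial coordinate.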

Citation (APA)

Yan, X., Kakadiaris, I. A., & Shah, S. K. (2014). What do I see? Modeling human visual perception for multi-person tracking. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8690 LNCS, pp. 314–329). Springer Verlag. https://doi.org/10.1007/978-3-319-10605-2_21
