Object tracking over multiple uncalibrated cameras using visual, spatial and temporal similarities


Abstract

Developing a practical multi-camera tracking solution for autonomous camera networks is a very challenging task, due to numerous constraints such as limited memory and processing power, heterogeneous visual characteristics of objects between camera views, and limited setup time and installation knowledge for camera calibration. In this paper, we propose a unified multi-camera tracking framework that runs online in real time and handles both independent field of view and common field of view cases. No camera calibration, knowledge of the relative positions of cameras, or entry and exit locations of objects is required. The memory footprint of the framework is minimised by reusing kernels. The heterogeneous visual characteristics of objects are addressed by a novel location-based kernel matching method. The proposed framework has been evaluated on real videos captured in multiple indoor settings, and it achieves efficient memory usage without compromising tracking accuracy. © 2010 Springer-Verlag.
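A minimal sketch of the general idea of combining visual, spatial and temporal similarities for cross-camera track matching is given below. The feature choices (a colour histogram compared with the Bhattacharyya coefficient, a Gaussian transit-time model, an image-plane distance decay) and the fusion weights are illustrative assumptions, not the authors' exact formulation or the paper's location-based kernel matching method.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Track:
    hist: np.ndarray         # normalised appearance histogram (e.g. colour kernel)
    xy: tuple                # image position where the object exited/entered the view
    time: float              # timestamp of that exit/entry event

def visual_similarity(a: Track, b: Track) -> float:
    # Bhattacharyya coefficient between normalised histograms (assumed feature).
    return float(np.sum(np.sqrt(a.hist * b.hist)))

def temporal_similarity(a: Track, b: Track,
                        mean_transit: float = 2.0, std: float = 1.0) -> float:
    # Gaussian likelihood of the observed inter-camera transit time
    # (illustrative model; the framework requires no such prior knowledge).
    dt = b.time - a.time
    return float(np.exp(-0.5 * ((dt - mean_transit) / std) ** 2))

def spatial_similarity(a: Track, b: Track, scale: float = 100.0) -> float:
    # Decays with image-plane distance between exit and entry points;
    # a stand-in, since no calibrated geometry is available.
    d = np.hypot(b.xy[0] - a.xy[0], b.xy[1] - a.xy[1])
    return float(np.exp(-d / scale))

def match_score(a: Track, b: Track, w=(0.5, 0.25, 0.25)) -> float:
    # Weighted fusion of the three cues; weights are illustrative.
    return (w[0] * visual_similarity(a, b)
            + w[1] * spatial_similarity(a, b)
            + w[2] * temporal_similarity(a, b))
```

In a setting like the one the abstract describes, a track that exits one view would be compared against candidate tracks entering other views, and the highest-scoring candidate above a threshold would be accepted as the same object.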

Citation (APA)

Wedge, D., Scott, A. F., Ma, Z., & Vendrig, J. (2010). Object tracking over multiple uncalibrated cameras using visual, spatial and temporal similarities. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6475 LNCS, pp. 167–178). https://doi.org/10.1007/978-3-642-17691-3_16
