Object matching in distributed video surveillance systems by LDA-based appearance descriptors


Abstract

Establishing correspondences among object instances is still challenging in multi-camera surveillance systems, especially when the cameras' fields of view are non-overlapping. Spatiotemporal constraints can help in solving the correspondence problem but still leave a wide margin of uncertainty. One way to reduce this uncertainty is to use appearance information about the moving objects in the site. In this paper we present the preliminary results of a new method that can capture salient appearance characteristics at each camera node in the network. A Latent Dirichlet Allocation (LDA) model is created and maintained at each node in the camera network. Each object is encoded in terms of the LDA bag-of-words model for appearance. The encoded appearance is then used to establish probable matches across cameras. Preliminary experiments are conducted on a dataset of 20 individuals, and a comparison against Madden's I-MCHR is reported. © 2009 Springer Berlin Heidelberg.
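The matching stage described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's exact pipeline: it assumes each camera node has already encoded its detections as LDA topic mixtures (probability vectors over K shared appearance "topics", e.g. as produced by fitting an LDA model to quantized color/texture words), and ranks gallery objects by a distribution distance. The Jensen-Shannon divergence used here is a common symmetric choice; the paper may use a different measure.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))  # KL(a || b)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def best_match(query, gallery):
    """Index of the gallery topic mixture closest to the query."""
    return int(np.argmin([js_divergence(query, g) for g in gallery]))

# Illustrative topic mixtures (K = 4) for three objects seen at camera A.
gallery = np.array([
    [0.70, 0.10, 0.10, 0.10],  # object 0: mostly topic 0
    [0.10, 0.60, 0.20, 0.10],  # object 1: mostly topic 1
    [0.05, 0.05, 0.10, 0.80],  # object 2: mostly topic 3
])
# Camera B re-observes object 1 with slightly perturbed appearance.
query = np.array([0.15, 0.55, 0.20, 0.10])
print(best_match(query, gallery))  # → 1
```

In practice the per-node topic mixtures could be obtained with an off-the-shelf LDA implementation (e.g. `sklearn.decomposition.LatentDirichletAllocation`, whose `transform` method returns per-document topic proportions) fitted on bag-of-words appearance histograms at each camera node.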

Citation (APA)

Lo Presti, L., Sclaroff, S., & La Cascia, M. (2009). Object matching in distributed video surveillance systems by LDA-based appearance descriptors. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5716 LNCS, pp. 547–557). https://doi.org/10.1007/978-3-642-04146-4_59
