Comparing visual feature coding for learning disjoint camera dependencies

Abstract

This paper systematically investigates the effectiveness of different visual feature coding schemes for facilitating the learning of time-delayed dependencies among disjoint multi-camera views. Accurate inter-camera dependency estimation across non-overlapping camera views is non-trivial, especially in crowded scenes where inter-object occlusion can be severe and frequent, and where the degree of crowdedness can change drastically over time. In contrast to existing methods that learn dependencies between disjoint cameras solely by correlating universal, object-independent low-level visual features or transition-time statistics, we propose to use either supervised or unsupervised feature coding to establish a robust and reliable representation for more accurately estimating inter-camera activity pattern dependencies. Comparative experiments on benchmark multi-camera datasets of crowded public scenes demonstrate the superiority of robust feature coding for learning inter-camera dependencies.
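The abstract does not spell out implementation details. As a rough illustration of the kind of pipeline it describes, the sketch below assumes an unsupervised coding step (a k-means visual-word codebook over low-level feature vectors), per-window activity histograms for each camera view, and a simple lagged-correlation measure of time-delayed dependency between two disjoint views. The function names, window length, and correlation measure are illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(features, k=32, seed=0):
    # Unsupervised coding: quantise low-level feature vectors into k visual words.
    # (A supervised coding scheme could be swapped in here instead.)
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(features)

def activity_profile(codebook, features, frame_ids, n_frames, window=25):
    # Per-window histogram of visual-word occurrences: a coded activity time
    # series for one camera view. L1-normalising each window gives some
    # robustness to changes in crowd density over time.
    words = codebook.predict(features)
    n_windows = max(n_frames // window, 1)
    profile = np.zeros((n_windows, codebook.n_clusters))
    for w, f in zip(words, frame_ids):
        profile[min(f // window, n_windows - 1), w] += 1
    return profile / np.maximum(profile.sum(axis=1, keepdims=True), 1.0)

def estimate_delay(profile_a, profile_b, max_lag=20):
    # Time-delayed dependency between two disjoint views: pick the lag that
    # maximises the correlation between their coded activity profiles.
    best_lag, best_corr = 0, -np.inf
    for lag in range(1, max_lag + 1):
        if lag >= len(profile_a):
            break
        corr = np.corrcoef(profile_a[:-lag].ravel(), profile_b[lag:].ravel())[0, 1]
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag, best_corr
```

In this sketch, only the coding stage changes between the schemes being compared; the downstream dependency estimation operates on whatever coded activity representation is produced.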

Citation (APA)
Zhu, X., Gong, S., & Loy, C. C. (2012). Comparing visual feature coding for learning disjoint camera dependencies. In BMVC 2012 - Electronic Proceedings of the British Machine Vision Conference 2012. British Machine Vision Association, BMVA. https://doi.org/10.5244/C.26.94

Register to see more suggestions

Mendeley helps you to discover research relevant for your work.

Already have an account?

Save time finding and organizing research with Mendeley

Sign up for free