Video registration using dynamic textures

Abstract

We propose a dynamic texture feature-based algorithm for registering two video sequences of a rigid or nonrigid scene taken from two synchronous or asynchronous cameras. We model each video sequence as the output of a linear dynamical system, and transform the task of registering frames of the two sequences to that of registering the parameters of the corresponding models. This allows us to perform registration using the more classical image-based features as opposed to space-time features, such as space-time volumes or feature trajectories. As the model parameters are not uniquely defined, we propose a generic method to resolve these ambiguities by jointly identifying the parameters from multiple video sequences. We finally test our algorithm on a wide variety of challenging video sequences and show that it matches the performance of significantly more computationally expensive existing methods. © 2008 Springer Berlin Heidelberg.
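To make the modeling step concrete: a standard way to identify a linear dynamical system (dynamic texture) from a video is the SVD-based method of Doretto et al., where vectorized frames form the columns of a data matrix and the appearance and dynamics parameters are read off a truncated factorization. The sketch below is a minimal illustration of that generic identification step, not the paper's joint multi-sequence method; the function name `identify_lds` and the synthetic data are our own assumptions.

```python
import numpy as np

def identify_lds(Y, n):
    """SVD-based dynamic-texture identification (Doretto et al. style sketch).

    Y : (p, F) matrix whose columns are vectorized video frames.
    n : state dimension (number of retained singular values).
    Returns (A, C, X) for the model x_{t+1} = A x_t, y_t = C x_t.
    Note: A and C are only identified up to an invertible change of
    basis, the ambiguity the paper resolves by joint identification.
    """
    U, S, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :n]                                # appearance parameters
    X = np.diag(S[:n]) @ Vt[:n, :]              # state trajectory, (n, F)
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])    # least-squares dynamics
    return A, C, X

# Hypothetical usage on noiseless synthetic data from a true LDS:
rng = np.random.default_rng(0)
n, p, F = 4, 50, 60
A_true = 0.9 * np.linalg.qr(rng.standard_normal((n, n)))[0]  # stable dynamics
C_true = rng.standard_normal((p, n))
x = rng.standard_normal(n)
states = []
for _ in range(F):
    states.append(x)
    x = A_true @ x
Y = C_true @ np.array(states).T                  # (p, F) data matrix

A, C, X = identify_lds(Y, n)
```

On clean rank-`n` data the factorization is exact, so `C @ X` reproduces `Y` and the recovered `A` matches the true dynamics up to the basis ambiguity noted in the abstract.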

Citation (APA)

Ravichandran, A., & Vidal, R. (2008). Video registration using dynamic textures. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5303 LNCS, pp. 514–526). Springer Verlag. https://doi.org/10.1007/978-3-540-88688-4_38
