Multiple features in temporal models for the representation of visual contents in video


Abstract

This paper analyzes different ways of coupling the information from multiple visual features in the representation of visual contents using temporal models based on Markov chains. We assume that the optimal combination is given by the Cartesian product of all feature state spaces. Simpler model structures are obtained by assuming independencies between random variables in the probabilistic structure. The relative entropy provides a measure of the information loss of a simplified structure with respect to a more complex one. The loss of information is then compared to the loss of accuracy in the representation of visual contents in video sequences, which is measured in terms of shot retrieval performance. We reach three main conclusions: (1) the full-coupled model structure is an accurate approximation to the Cartesian product structure, (2) the largest loss of information is found when direct temporal dependencies are removed, and (3) there is a direct relationship between loss of information and loss of representation accuracy. © Springer-Verlag Berlin Heidelberg 2003.
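The core quantity in the abstract, the relative entropy between a complex model structure and a simplified one obtained by assuming independence between features, can be illustrated with a minimal sketch. The distribution below is hypothetical and uses a single joint distribution over two binary features rather than the paper's Markov-chain transition models over Cartesian-product state spaces; it only shows how assuming independence loses information.

```python
import numpy as np

# Hypothetical joint distribution over two binary feature states
# (illustrative numbers only; the paper works with Markov chains
# over the Cartesian product of feature state spaces).
p_joint = np.array([[0.35, 0.15],
                    [0.05, 0.45]])

# Simplified structure: assume the two features are independent,
# i.e. approximate the joint by the product of its marginals.
p_a = p_joint.sum(axis=1)          # marginal of feature A
p_b = p_joint.sum(axis=0)          # marginal of feature B
p_factored = np.outer(p_a, p_b)

# Relative entropy (KL divergence) D(p_joint || p_factored), in bits:
# the information lost by the independence assumption.
kl = float(np.sum(p_joint * np.log2(p_joint / p_factored)))
print(round(kl, 4))
```

A larger value of `kl` means the independence assumption discards more information about the coupling between the features; the paper's third conclusion is that this information loss tracks the loss of retrieval accuracy.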

Citation (APA)

Sánchez, J. M., Binefa, X., & Kender, J. R. (2003). Multiple features in temporal models for the representation of visual contents in video. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Springer Verlag. https://doi.org/10.1007/3-540-45113-7_22
