Video retrieval by context-based interpretation of time-to-collision descriptors

Abstract

Video retrieval using high-level indices is more meaningful than querying with low-level features. In this paper, we show how perceptual features such as time-to-collision (TTC) can lead to several high-level categories. Experiments have been conducted to validate our proposed TTC detection algorithm, which computes TTC from the divergence of the image velocity field. A simple and novel method, termed the pilot cue, is used to further refine the algorithm. Our initial system follows a rule-based approach in which the extracted TTC shots (a low-level feature) are mapped to their corresponding high-level indices. The information conveyed by their neighboring frames or shots (i.e., contextual information) is used to facilitate this mapping. Several psychological effects (high-level indices), such as intimacy, suspense, and terror, are recovered as a result. © Springer-Verlag Berlin Heidelberg 2003.
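
The abstract states that TTC is computed from the divergence of the image velocity field. The sketch below is a minimal illustration of that idea, not the authors' algorithm: it assumes OpenCV's Farneback dense optical flow, applies the standard relation TTC ≈ 2 / div(v) for a looming fronto-parallel surface under pure translation, and averages the divergence over a hypothetical central window to suppress noise. The paper's pilot-cue refinement and the shot-level mapping to high-level indices are not reproduced here.

```python
# Hypothetical sketch (not the authors' implementation): time-to-collision
# from the divergence of a dense optical-flow field.
import cv2
import numpy as np


def ttc_from_flow(prev_gray: np.ndarray, next_gray: np.ndarray, fps: float) -> float:
    """Return an approximate time-to-collision (seconds) for the central image region."""
    # Dense Farneback flow; flow[..., 0] = horizontal u, flow[..., 1] = vertical v.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    u, v = flow[..., 0], flow[..., 1]

    # Divergence of the image velocity field: du/dx + dv/dy (per frame).
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)
    div = du_dx + dv_dy

    # Average over a central window (an assumption, not the paper's pilot cue).
    h, w = div.shape
    mean_div = float(np.mean(div[h // 4: 3 * h // 4, w // 4: 3 * w // 4]))

    if mean_div <= 1e-6:            # no net expansion -> no imminent collision
        return float("inf")

    ttc_frames = 2.0 / mean_div     # TTC in frames for an expanding (looming) pattern
    return ttc_frames / fps         # convert to seconds
```

In practice, consecutive grayscale frames would be read from a video (e.g., via cv2.VideoCapture), the returned TTC thresholded to flag collision shots, and those shots then interpreted against their neighboring shots, as the paper's context-based mapping proposes.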

Citation (APA)

Mittal, A., & Sung, W. K. (2003). Video retrieval by context-based interpretation of time-to-collision descriptors. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2756, 206–213. https://doi.org/10.1007/978-3-540-45179-2_26
