A self-referential perceptual inference framework for video interpretation

Abstract

This paper presents an extensible architectural model for general content-based analysis and indexing of video data that can be customised for a given problem domain. Video interpretation is approached as a joint inference problem that can be solved using modern machine learning and probabilistic inference techniques. An important aspect of the work concerns the use of a novel active knowledge representation methodology based on an ontological query language. This representation allows one to pose the problem of video analysis in terms of queries expressed in a visual language incorporating prior hierarchical knowledge of the syntactic and semantic structure of entities, relationships, and events of interest occurring in a video sequence. Perceptual inference then takes place within an ontological domain defined by the structure of the problem and the current goal set. © Springer-Verlag Berlin Heidelberg 2003.
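
To make the idea concrete, the sketch below is a minimal, hypothetical illustration of how a content-based query (e.g. "person near car") might be scored over per-frame detections by combining detector confidences under a simple product rule. It is not the authors' query language or inference machinery; all class and function names here are assumptions introduced purely for illustration.

    # Hypothetical sketch: posing a video-analysis goal as a query over
    # entities and a spatial relation, scored by combining per-frame
    # detector confidences. Names and structure are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class Detection:
        """A candidate entity found in one frame, with a detector confidence."""
        label: str
        confidence: float  # P(entity present | image evidence), from some detector
        box: tuple         # (x, y, w, h) in pixels

    @dataclass
    class Frame:
        index: int
        detections: list = field(default_factory=list)

    def overlaps(a: Detection, b: Detection) -> bool:
        """Crude spatial relation: axis-aligned bounding boxes intersect."""
        ax, ay, aw, ah = a.box
        bx, by, bw, bh = b.box
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    def query_near(frames, label_a, label_b):
        """Score each frame for the query 'label_a near label_b'.

        Confidences are combined with a simple product rule, standing in
        for the joint probabilistic inference described in the abstract.
        """
        scores = {}
        for frame in frames:
            best = 0.0
            for a in frame.detections:
                for b in frame.detections:
                    if a.label == label_a and b.label == label_b and overlaps(a, b):
                        best = max(best, a.confidence * b.confidence)
            scores[frame.index] = best
        return scores

    if __name__ == "__main__":
        frames = [
            Frame(0, [Detection("person", 0.9, (10, 10, 50, 100)),
                      Detection("car", 0.8, (40, 30, 120, 60))]),
            Frame(1, [Detection("person", 0.7, (200, 10, 50, 100)),
                      Detection("car", 0.85, (40, 30, 120, 60))]),
        ]
        print(query_near(frames, "person", "car"))  # e.g. {0: 0.72..., 1: 0.0}

In the paper's framing, the query vocabulary, the relations, and the way evidence is combined would all be supplied by the domain ontology rather than hard-coded as above.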

Citation (APA)

Town, C., & Sinclair, D. (2003). A self-referential perceptual inference framework for video interpretation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2626, pp. 54–67). Springer Verlag. https://doi.org/10.1007/3-540-36592-3_6
