Perceptual narratives of space and motion for semantic interpretation of visual data

Abstract

We propose a commonsense theory of space and motion for the high-level semantic interpretation of dynamic scenes. The theory provides primitives for commonsense representation and reasoning with qualitative spatial relations, depth profiles, and spatio-temporal change; these may be combined with probabilistic methods for modelling and hypothesising event and object relations. The proposed framework has been implemented as a general activity abstraction and reasoning engine, which we demonstrate by generating declaratively grounded visuo-spatial narratives of perceptual input from vision and depth sensors for a benchmark scenario. Our long-term goal is to provide general tools (integrating different aspects of space, action, and change) necessary for tasks such as real-time human activity interpretation and dynamic sensor control within the purview of cognitive vision, interaction, and control.
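
The abstract itself contains no code, but as a rough illustration of what a qualitative spatial abstraction over low-level perceptual input can look like, the following Python sketch maps detector bounding boxes to coarse RCC-style topological relations (dc, po, pp, ppi, eq). The Box type, the relation labels, and the example regions are assumptions made only for this sketch; the paper's framework grounds such relations declaratively rather than through this ad-hoc procedure.

    # Illustrative sketch (not from the paper): deriving coarse RCC-style
    # qualitative spatial relations between axis-aligned bounding boxes,
    # e.g. as produced by an object detector on RGB-D input.
    from dataclasses import dataclass

    @dataclass
    class Box:
        x1: float
        y1: float
        x2: float
        y2: float

        def area(self) -> float:
            return max(0.0, self.x2 - self.x1) * max(0.0, self.y2 - self.y1)

    def intersection(a: Box, b: Box) -> float:
        # Overlap area of two axis-aligned boxes (0.0 if they do not touch).
        w = min(a.x2, b.x2) - max(a.x1, b.x1)
        h = min(a.y2, b.y2) - max(a.y1, b.y1)
        return max(0.0, w) * max(0.0, h)

    def topology(a: Box, b: Box) -> str:
        """Map two boxes to a coarse qualitative topological relation."""
        inter = intersection(a, b)
        if inter == 0.0:
            return "dc"    # disconnected
        if inter == a.area() == b.area():
            return "eq"    # equal regions
        if inter == a.area():
            return "pp"    # a is a proper part of b
        if inter == b.area():
            return "ppi"   # b is a proper part of a
        return "po"        # partially overlapping

    # Hypothetical example: a detected hand region overlapping a cup region.
    hand = Box(100, 120, 180, 200)
    cup = Box(160, 150, 240, 230)
    print(topology(hand, cup))  # -> "po"

In the framework described in the abstract, qualitative relations of this kind would additionally be combined with depth profiles and spatio-temporal change, and only then used to hypothesise event and object relations in the resulting perceptual narrative.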

Citation (APA)

Suchan, J., Bhatt, M., & Santos, P. E. (2015). Perceptual narratives of space and motion for semantic interpretation of visual data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8926, pp. 339–354). Springer. https://doi.org/10.1007/978-3-319-16181-5_24
