Flexible querying using structural and event based multimodal video data model


Abstract

Investments in multimedia technology allow ever more reflections of the real world to be stored digitally as video, carrying a great deal of information directly into the digital world. Storing and efficiently querying this information requires a video database system (VDBS). We propose a structural, event-based and multimodal (SEBM) video data model for VDBSs that supports three modalities, visual, auditory and textual, and fuses all three within a single SEBM model. Because SEBM stores video data in the way users interpret real-world data, content-based, spatio-temporal and fuzzy queries can be answered more easily. We follow a divide-and-conquer technique when answering very complicated queries. We present algorithms for querying SEBM and evaluate them on an implemented SEBM prototype system. © Springer-Verlag Berlin Heidelberg 2006.
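The abstract does not show the model itself, but a minimal sketch can illustrate the idea of keeping events from all three modalities in one annotation store and answering a conjunctive query by filtering each condition separately and intersecting the results. All names below (Event, Video, query, the sample events) are hypothetical illustrations, not the paper's actual SEBM schema or API.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Modality(Enum):
    VISUAL = "visual"
    AUDITORY = "auditory"
    TEXTUAL = "textual"

@dataclass
class Event:
    """One annotated video event in a single modality (hypothetical structure)."""
    name: str                 # e.g. "goal", "applause", "commentary: ..."
    modality: Modality
    start: float              # start time in seconds
    end: float                # end time in seconds
    objects: List[str] = field(default_factory=list)  # participating objects

@dataclass
class Video:
    """A video whose visual, auditory and textual events share one event list."""
    title: str
    events: List[Event] = field(default_factory=list)

    def query(self, name: Optional[str] = None,
              modality: Optional[Modality] = None,
              overlaps: Optional[Event] = None) -> List[Event]:
        """Answer a simple conjunctive query by applying each condition as a
        separate filter (a divide-and-conquer style evaluation)."""
        result = self.events
        if name is not None:
            result = [e for e in result if e.name == name]
        if modality is not None:
            result = [e for e in result if e.modality == modality]
        if overlaps is not None:  # temporal-overlap condition
            result = [e for e in result
                      if e.start < overlaps.end and overlaps.start < e.end]
        return result

# Usage: find visual events that overlap a detected applause sound.
video = Video("match.mpg", [
    Event("goal", Modality.VISUAL, 12.0, 15.0, ["player_7", "ball"]),
    Event("applause", Modality.AUDITORY, 13.0, 20.0),
    Event("commentary: goal!", Modality.TEXTUAL, 14.0, 16.0),
])
applause = video.query(name="applause", modality=Modality.AUDITORY)[0]
print(video.query(modality=Modality.VISUAL, overlaps=applause))
```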

Citation (APA)

Öztarak, H., & Yazici, A. (2006). Flexible querying using structural and event based multimodal video data model. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4027 LNAI, pp. 75–86). Springer Verlag. https://doi.org/10.1007/11766254_7
