A generic framework for semantic video indexing based on visual concepts/contexts detection

Abstract

Providing semantic access to video data requires the development of concept detectors. However, detecting semantic concepts is a hard task because of the large intra-class and small inter-class variability of visual content. Moreover, semantic concepts co-occur in various contexts, and the strength of these co-occurrences varies from one context to another, so it is worthwhile to exploit this knowledge to improve detection performance. In this paper we present a generic semantic video indexing scheme, called SVI_REGIMVid, organized in three levels of analysis. The first level (level 1) covers low-level processing: video shot boundary and key-frame detection, annotation tools, key-point detection and visual feature extraction. The second level (level 2) builds semantic models for the supervised learning of concepts/contexts. The third level (level 3) enriches the semantic interpretation of concepts/contexts by exploiting fuzzy knowledge. The experimental results obtained are promising for semantic concept/context detection.
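To make the three-level organization concrete, the sketch below outlines one possible realization of such a pipeline. It is not taken from the paper: the use of OpenCV and scikit-learn, the histogram-based shot-boundary threshold, ORB key-points, the RBF SVM per concept, and the fuzzy blending rule are all illustrative assumptions standing in for the components the abstract names (key-frame detection, key-point detection and feature extraction, supervised concept models, and fuzzy contextual enrichment).

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# --- Level 1: shot boundary / key-frame detection via colour-histogram distance ---
def extract_keyframes(video_path, threshold=0.4):
    """Return one representative frame per detected shot.

    A shot boundary is declared when the Bhattacharyya distance between
    consecutive frame histograms exceeds `threshold` (value chosen for
    illustration only, not from the paper).
    """
    cap = cv2.VideoCapture(video_path)
    keyframes, prev_hist = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is None or cv2.compareHist(
                prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
            keyframes.append(frame)          # first frame of a new shot
        prev_hist = hist
    cap.release()
    return keyframes

# --- Level 1 (cont.): key-point detection and a fixed-length visual descriptor ---
def describe(frame, n_features=200):
    """Detect ORB key-points and pool their descriptors into one vector
    (a crude stand-in for a bag-of-visual-words representation)."""
    orb = cv2.ORB_create(nfeatures=n_features)
    _, desc = orb.detectAndCompute(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)
    if desc is None:
        return np.zeros(32)
    return desc.mean(axis=0)

# --- Level 2: one supervised model per concept/context ---
def train_concept_model(features, labels):
    """Binary SVM with probability outputs; one detector is trained
    per concept or context from annotated key-frames."""
    model = SVC(kernel="rbf", probability=True)
    model.fit(features, labels)
    return model

# --- Level 3: fuzzy contextual re-scoring of a concept ---
def contextual_rescore(concept_score, context_score, weight=0.3):
    """Blend a concept score with the score of a co-occurring context.

    min() plays the role of a simple fuzzy conjunction; `weight` controls
    how strongly the context is allowed to reinforce the concept.
    """
    return (1 - weight) * concept_score + weight * min(concept_score, context_score)
```

As a usage illustration (the video path and concept labels are hypothetical), one would extract key-frames, describe each of them, train one `train_concept_model` per annotated concept, and finally pass the raw SVM probabilities through `contextual_rescore` using the score of a related context detector.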

Citation (APA)

Elleuch, N., Ben Ammar, A., & Alimi, A. M. (2015). A generic framework for semantic video indexing based on visual concepts/contexts detection. Multimedia Tools and Applications, 74(4), 1397–1421. https://doi.org/10.1007/s11042-014-1955-9
