Towards a large scale concept ontology for broadcast video

Abstract

Earlier this year, a major effort was initiated to study the theoretical and empirical aspects of automatically detecting semantic concepts in broadcast video, complementing ongoing research in video analysis, the TRECVID video retrieval evaluations conducted by the U.S. National Institute of Standards and Technology (NIST), and MPEG-7 standardization. The video analysis community has long struggled to bridge the gap between successful low-level feature analysis (color histograms, texture, shape) and semantic description of video content. One approach is to use a set of intermediate textual descriptors that can be reliably applied to visual scenes (e.g., outdoors, faces, animals). If a sufficiently rich set of such intermediate descriptors can be defined, in the form of large lexicons and taxonomic classification schemes, these descriptors would enable robust, general-purpose semantic annotation and retrieval of video content.
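To make the "intermediate descriptor" idea concrete, the sketch below shows one plausible minimal pipeline (not taken from the paper): a low-level feature (a color histogram) is extracted from annotated keyframes, one binary detector is trained per concept in a small lexicon, and new frames are then annotated with the textual descriptors those detectors predict. The concept names, the feature choice, and the classifier are illustrative assumptions only.

# Minimal sketch, assuming a per-concept binary classifier over color-histogram
# features. Concept lexicon, feature, and model are hypothetical choices.

import numpy as np
from sklearn.linear_model import LogisticRegression

LEXICON = ["outdoors", "face", "animal"]  # hypothetical concept lexicon

def color_histogram(frame, bins=8):
    """Low-level feature: a normalized per-channel histogram of an RGB frame."""
    hist = [np.histogram(frame[..., c], bins=bins, range=(0, 255))[0]
            for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

def train_concept_detectors(frames, labels):
    """Train one binary detector per lexicon concept.

    labels[i] is the set of concepts annotated for frames[i].
    """
    X = np.stack([color_histogram(f) for f in frames])
    detectors = {}
    for concept in LEXICON:
        y = np.array([concept in lab for lab in labels], dtype=int)
        detectors[concept] = LogisticRegression(max_iter=1000).fit(X, y)
    return detectors

def annotate(frame, detectors, threshold=0.5):
    """Return the intermediate textual descriptors predicted for a frame."""
    x = color_histogram(frame).reshape(1, -1)
    return [c for c, clf in detectors.items()
            if clf.predict_proba(x)[0, 1] >= threshold]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for annotated keyframes (H x W x RGB).
    frames = [rng.integers(0, 256, (120, 160, 3), dtype=np.uint8) for _ in range(40)]
    labels = [set(rng.choice(LEXICON, size=rng.integers(0, 3), replace=False))
              for _ in range(40)]
    detectors = train_concept_detectors(frames, labels)
    print(annotate(frames[0], detectors))

In a real system the predicted descriptors, rather than the raw features, would be indexed, so that text queries such as "outdoors" retrieve the corresponding shots; scaling this up is exactly where a large lexicon and a taxonomic classification scheme would matter.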

Citation (APA)

Hauptmann, A. G. (2004). Towards a large scale concept ontology for broadcast video. In Lecture Notes in Computer Science (Vol. 3115, pp. 674–675). Springer-Verlag. https://doi.org/10.1007/978-3-540-27814-6_78
