Video event retrieval from a small number of examples using rough set theory

Abstract

In this paper, we develop an example-based event retrieval method that constructs a model for retrieving events of interest in a video archive from examples provided by a user. This is challenging because shots of the same event are characterized by significantly different features, owing to differences in camera techniques, settings, and so on: the video archive contains a large variety of shots of the event, while the user can provide only a small number of examples. To address this, we use rough set theory to capture the various characteristics of the event. Specifically, rough set theory lets us extract classification rules, each of which correctly identifies a different subset of the positive examples. Furthermore, to extract a larger variety of classification rules, we incorporate bagging and the random subspace method into rough set theory: we define indiscernibility relations among examples based on the outputs of classifiers built on different subsets of examples and different subsets of feature dimensions. Experimental results on TRECVID 2009 video data validate the effectiveness of our example-based event retrieval method. © 2011 Springer-Verlag Berlin Heidelberg.
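To illustrate the combination described in the abstract, the sketch below shows one way bagging and the random subspace method can feed an indiscernibility relation: several classifiers are trained on bootstrap samples of the examples and random subsets of feature dimensions, and examples that receive identical output vectors from all classifiers are grouped together. This is a minimal Python sketch under our own assumptions (SVM base classifiers, binary labels, hypothetical function names); it is not the authors' implementation and omits the rule-extraction step.

```python
# Minimal sketch (not the authors' code): bagging + random subspace
# classifiers whose binary outputs define an indiscernibility relation
# among examples, in the spirit of rough set theory.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def build_ensemble(X, y, n_classifiers=10, subspace_ratio=0.5):
    """Train classifiers on bootstrap samples and random feature subspaces."""
    n_samples, n_features = X.shape
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    ensemble = []
    for _ in range(n_classifiers):
        # Bagging: bootstrap the (few) positive and negative examples
        # separately so both classes are always present.
        idx = np.concatenate([rng.choice(pos, size=len(pos), replace=True),
                              rng.choice(neg, size=len(neg), replace=True)])
        # Random subspace: a random subset of feature dimensions.
        dims = rng.choice(n_features,
                          size=max(1, int(subspace_ratio * n_features)),
                          replace=False)
        clf = SVC(kernel="rbf").fit(X[idx][:, dims], y[idx])
        ensemble.append((clf, dims))
    return ensemble

def indiscernible_groups(ensemble, X):
    """Group examples that every classifier in the ensemble labels identically.

    Each group approximates one 'kind' of shot that a single
    classification rule could cover.
    """
    outputs = np.stack([clf.predict(X[:, dims]) for clf, dims in ensemble],
                       axis=1)
    groups = {}
    for i, row in enumerate(map(tuple, outputs)):
        groups.setdefault(row, []).append(i)
    return list(groups.values())
```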

Citation (APA)

Shirahama, K., Matsuoka, Y., & Uehara, K. (2011). Video event retrieval from a small number of examples using rough set theory. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6523 LNCS, pp. 96–106). https://doi.org/10.1007/978-3-642-17832-0_10
