Unsupervised video shot segmentation using global color and texture information

Abstract

This paper presents an effective algorithm for segmenting color video into shots for video indexing and retrieval applications. The work adds global texture information to our previous method, which extended the scale-invariant feature transform (SIFT) to a color global SIFT (CGSIFT). Fibonacci lattice quantization is used to quantize the image, and five color features are extracted for each region of the image defined by a symmetrical template. Then, within each region partitioned by the template, the entropy and energy of a co-occurrence matrix are computed as texture features. Using these global color and texture features, clustering ensembles are applied to segment the video into shots. Experimental results show that the additional texture features allow the proposed CGTSIFT algorithm to outperform our previous work as well as fuzzy c-means and SOM-based shot detection methods. © Springer-Verlag Berlin Heidelberg 2008.
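
As a rough illustration of the texture step described in the abstract, the sketch below builds a gray-level co-occurrence matrix for a quantized image region and computes its entropy and energy. This is a minimal Python sketch under assumptions of our own (a single pixel offset, 16 quantization levels, and function names chosen here for illustration), not the authors' implementation; the color features, the symmetrical template, and the clustering-ensemble stage are not shown.

import numpy as np

def cooccurrence_matrix(region, levels, offset=(0, 1)):
    # Normalized co-occurrence matrix of a quantized 2-D region.
    # `levels` is the number of quantization levels; `offset` is the
    # (row, col) displacement used to form pixel pairs.
    dr, dc = offset
    glcm = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = region.shape
    for r in range(max(0, -dr), rows - max(0, dr)):
        for c in range(max(0, -dc), cols - max(0, dc)):
            glcm[region[r, c], region[r + dr, c + dc]] += 1.0
    total = glcm.sum()
    return glcm / total if total > 0 else glcm

def texture_features(region, levels=16):
    # Entropy and energy of the region's co-occurrence matrix.
    p = cooccurrence_matrix(region, levels)
    nonzero = p[p > 0]
    entropy = -np.sum(nonzero * np.log2(nonzero))
    energy = np.sum(p ** 2)  # also known as angular second moment
    return entropy, energy

if __name__ == "__main__":
    # Example: a randomly quantized 32x32 region with 16 levels.
    rng = np.random.default_rng(0)
    region = rng.integers(0, 16, size=(32, 32))
    print(texture_features(region))

In the paper's pipeline, such entropy and energy values would be computed per template region and concatenated with the five per-region color features before the clustering-ensemble stage.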

Cite

APA

Chang, Y., Lee, D. J., Hong, Y., & Archibald, J. (2008). Unsupervised video shot segmentation using global color and texture information. In Lecture Notes in Computer Science (Vol. 5358, pp. 460–467). Springer. https://doi.org/10.1007/978-3-540-89639-5_44
