Temporal-spatial refinements for video concept fusion


Abstract

Context-based concept fusion (CBCF) is increasingly used in video semantic indexing; it exploits the various relations among concepts to refine the original detection results. In this paper, we present a CBCF method called the Temporal-Spatial Node Balance algorithm (TSNB). The method is based on a physical model in which concepts are regarded as nodes and relations as forces, and all spatial and temporal relations are balanced against the cost of moving the nodes. This makes the method intuitive and interpretable in explaining how a concept influences, or is influenced by, other concepts, and it uses both spatial and temporal information to describe the semantic structure of the video. We evaluate the TSNB algorithm on the TRECVid 2005-2010 datasets. The results show that this method outperforms all existing works known to us, and it is also faster. © 2013 Springer-Verlag.
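To make the force-balance idea concrete, below is a minimal, hypothetical sketch of how detection scores might be refined by balancing pairwise relation "forces" against a cost for moving each node away from its original score. It is not the authors' implementation: the function name refine_scores, the relation matrix, and all parameters are illustrative assumptions, and temporal relations from neighbouring shots would need to be folded into the same relation matrix.

```python
# Minimal, hypothetical sketch of a force-balance score refinement in the
# spirit of TSNB. All names and parameters are illustrative assumptions,
# not the authors' actual implementation.
import numpy as np

def refine_scores(scores, relation, move_cost=1.0, step=0.1, iters=200):
    """Refine per-concept detection scores.

    scores   : (n,) initial detection scores in [0, 1]
    relation : (n, n) signed relation strengths between concepts
               (positive = co-occurring concepts pull each other's scores up)
    move_cost: penalty for moving a node away from its original score
    """
    s0 = np.asarray(scores, dtype=float)
    s = s0.copy()
    for _ in range(iters):
        # Force from related concepts: a positively related, higher-scoring
        # neighbour pulls this concept's score upward (and vice versa).
        pull = (relation * (s[None, :] - s[:, None])).sum(axis=1)
        # Restoring force: the cost of moving away from the original score.
        restore = move_cost * (s0 - s)
        s = np.clip(s + step * (pull + restore), 0.0, 1.0)
    return s

# Toy usage: "road" and "car" are positively related, "indoor" conflicts
# with both, so the weak "car" score is pulled up by the strong "road" score.
scores = [0.9, 0.2, 0.5]
relation = np.array([[0.0,  0.8, -0.3],
                     [0.8,  0.0, -0.3],
                     [-0.3, -0.3, 0.0]])
print(refine_scores(scores, relation))
```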

Citation (APA)

Geng, J., Miao, Z., & Chi, H. (2013). Temporal-spatial refinements for video concept fusion. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7726 LNCS, pp. 547–559). https://doi.org/10.1007/978-3-642-37431-9_42
