Extracting semantic information from basketball video based on audio-visual features

Abstract

In this paper, we propose a mechanism for extracting semantic information from basketball video sequences using audio and visual features. After dividing the input video into shots with a simple cut-detection algorithm based on visual information, we analyze the audio signal to locate important events, at which cheering typically begins, using a combination of MFCC features and LPC entropy. Finally, we extract shot-class semantics with computer-vision techniques such as basketball tracking and detection of related objects. Experimental results show that the proposed scheme extracts semantics from basketball video data more reliably than existing methods.
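The abstract mentions a simple visual cut-detection step as the first stage of the pipeline. The paper does not specify the exact algorithm, but a common baseline of this kind compares intensity histograms of consecutive frames and declares a shot boundary when they diverge sharply. The sketch below illustrates that idea; the function name, bin count, and threshold are illustrative assumptions, not the authors' actual parameters.

```python
import numpy as np

def detect_cuts(frames, bins=16, threshold=0.5):
    """Flag shot boundaries via histogram differences (illustrative sketch,
    not the paper's exact method).

    frames: iterable of 2-D grayscale arrays with values in [0, 255].
    Returns indices i such that a cut occurs between frame i-1 and frame i.
    """
    cuts = []
    prev_hist = None
    for i, frame in enumerate(frames):
        # Normalized intensity histogram of the current frame
        hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
        hist = hist / hist.sum()
        if prev_hist is not None:
            # L1 distance between successive histograms; a large jump
            # suggests an abrupt shot change (a "cut")
            if np.abs(hist - prev_hist).sum() > threshold:
                cuts.append(i)
        prev_hist = hist
    return cuts
```

For example, a sequence of five uniformly dark frames followed by five uniformly bright frames would yield a single detected cut at the first bright frame. Real systems typically add safeguards against gradual transitions (fades, dissolves), which a fixed threshold on adjacent-frame differences does not handle.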

APA

Kim, K., Choi, J., Kim, N., & Kim, P. (2002). Extracting semantic information from basketball video based on audio-visual features. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2383, pp. 278–288). Springer Verlag. https://doi.org/10.1007/3-540-45479-9_30
