Semantic-level content analysis is crucial for efficient content retrieval and management. We propose a hierarchical approach that models audio events over a time series to accomplish semantic context detection. Two levels of modeling, audio event modeling and semantic context modeling, are devised to bridge the gap between physical audio features and semantic concepts. In this work, hidden Markov models (HMMs) are used to model four representative audio events in action movies: gunshot, explosion, engine, and car braking. At the semantic context level, generative (ergodic HMM) and discriminative (support vector machine, SVM) approaches are investigated to fuse the characteristics and correlations among audio events, which provide cues for detecting gunplay and car-chasing scenes. The experimental results demonstrate the effectiveness of the proposed approaches and provide a preliminary framework for information mining based on audio characteristics.
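The sketch below is a minimal illustration of the two-level pipeline the abstract describes, not the authors' implementation: one HMM is trained per audio event, the per-window log-likelihoods of those HMMs form a fusion feature vector, and an SVM classifies the semantic context from that vector. The use of hmmlearn and scikit-learn, the synthetic features standing in for real audio frames, the 13-dimensional (MFCC-like) feature size, the window lengths, and the toy labels are all assumptions made purely for illustration.

```python
# Minimal sketch of a two-level audio-event / semantic-context pipeline.
# NOT the paper's implementation: libraries, features, and window sizes are assumed.
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.svm import SVC

rng = np.random.default_rng(0)
EVENTS = ["gunshot", "explosion", "engine", "car_braking"]
N_FEAT = 13  # assumed MFCC-like frame feature dimensionality

def synthetic_event_frames(event_id, n_frames=200):
    """Stand-in for real audio frame features of one event class."""
    return rng.normal(loc=event_id, scale=1.0, size=(n_frames, N_FEAT))

# --- Level 1: one HMM per audio event --------------------------------------
event_models = {}
for i, name in enumerate(EVENTS):
    hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    hmm.fit(synthetic_event_frames(i))          # train on frames of this event
    event_models[name] = hmm

def event_likelihoods(window_frames):
    """Log-likelihood of a window under each event HMM -> fusion feature vector."""
    return np.array([m.score(window_frames) for m in event_models.values()])

# --- Level 2: discriminative fusion (SVM) over event likelihoods -----------
# Toy labels: 1 = gunplay-like context, 0 = car-chasing-like context.
X_ctx, y_ctx = [], []
for label, dominant in [(1, 0), (1, 1), (0, 2), (0, 3)]:
    for _ in range(30):
        X_ctx.append(event_likelihoods(synthetic_event_frames(dominant, n_frames=50)))
        y_ctx.append(label)

svm = SVC(kernel="rbf", probability=True).fit(np.array(X_ctx), np.array(y_ctx))

# Classify a new gunshot-dominated window.
test_window = synthetic_event_frames(0, n_frames=50)
print("P(gunplay context) =", svm.predict_proba([event_likelihoods(test_window)])[0, 1])
```

Replacing the SVM with an ergodic HMM over event-label sequences would give the generative fusion variant the abstract also mentions; the discriminative version is shown here only because it is the shorter of the two to sketch.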
Chu, W. T., Cheng, W. H., & Wu, J. L. (2006). Semantic context detection using audio event fusion. EURASIP Journal on Applied Signal Processing, 2006, 1–12. https://doi.org/10.1155/ASP/2006/27390