We propose a novel approach for detecting semantic regions (pure vocals, pure instrumental, and instrumental mixed with vocals) in acoustic music signals. The signal is first segmented at the beat level using our proposed rhythm tracking algorithm. Cepstral coefficients are then extracted on the Octave Scale for each segment to characterize the music content. Finally, a hierarchical classification method detects the semantic regions. Unlike previous methods, our approach fully incorporates music knowledge when segmenting and detecting semantic regions in music signals. Experimental results show that over 80% accuracy is achieved for semantic region detection. © Springer-Verlag Berlin Heidelberg 2004.
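The abstract describes cepstral features computed on octave-spaced bands rather than the mel scale used by MFCCs. A minimal sketch of that idea is below, assuming a triangular filterbank with octave-spaced center frequencies followed by a log and a DCT; the filter parameters (`fmin`, `n_oct`, `n_coeffs`) are illustrative choices, not values taken from the paper.

```python
import numpy as np

def octave_filterbank(sr, n_fft, fmin=62.5, n_oct=8):
    """Triangular filters with octave-spaced center frequencies
    (hypothetical parameters; the paper's exact filterbank is not given)."""
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    centers = fmin * 2.0 ** np.arange(n_oct)            # 62.5 Hz .. 8 kHz
    edges = np.concatenate(([centers[0] / 2], centers, [centers[-1] * 2]))
    fb = np.zeros((n_oct, freqs.size))
    for i in range(n_oct):
        lo, c, hi = edges[i], edges[i + 1], edges[i + 2]
        rising = (freqs - lo) / (c - lo)
        falling = (hi - freqs) / (hi - c)
        fb[i] = np.clip(np.minimum(rising, falling), 0.0, None)
    return fb

def octave_cepstral_coeffs(frame, sr, n_coeffs=4):
    """Log filterbank energies followed by a type-II DCT, analogous to
    MFCC computation but on octave bands (a sketch, not the authors'
    exact method)."""
    spec = np.abs(np.fft.rfft(frame)) ** 2               # power spectrum
    fb = octave_filterbank(sr, frame.size)
    energies = np.log(fb @ spec + 1e-10)                 # per-band log energy
    n = energies.size
    k = np.arange(n_coeffs)[:, None]
    m = np.arange(n)[None, :]
    dct = np.cos(np.pi * k * (2 * m + 1) / (2 * n))      # DCT-II basis
    return dct @ energies
```

In the paper's pipeline, such features would be computed per beat-level segment and fed to the hierarchical classifier.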
Citation
Maddage, N. C., Xu, C., Shenoy, A., & Wang, Y. (2004). Semantic region detection in acoustic music signals. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3332, 874–881. https://doi.org/10.1007/978-3-540-30542-2_108