Semantic region detection in acoustic music signals

Abstract

We propose a novel approach to detect semantic regions (pure vocals, pure instrumental, and instrumental mixed vocals) in acoustic music signals. The acoustic music signal is first segmented at the beat level using our proposed rhythm tracking algorithm. Then, for each segment, cepstral coefficients are extracted on an octave scale to characterize the music content. Finally, a hierarchical classification method is proposed to detect the semantic regions. Unlike previous methods, our approach fully incorporates music knowledge when segmenting the signal and detecting its semantic regions. Experimental results show that over 80% accuracy is achieved for semantic region detection. © Springer-Verlag Berlin Heidelberg 2004.
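The sketch below illustrates the three-stage pipeline described in the abstract: beat-level segmentation, octave-scale cepstral features, and hierarchical classification. It is not the authors' implementation; librosa's default beat tracker stands in for the proposed rhythm tracking algorithm, a constant-Q transform followed by a DCT approximates the octave-scale cepstral coefficients, and two SVMs realise one plausible two-stage hierarchy (the paper does not specify the exact hierarchy order used).

```python
# Hypothetical sketch of the pipeline, under the assumptions stated above.
import numpy as np
import librosa
from scipy.fftpack import dct
from sklearn.svm import SVC


def beat_segments(y, sr):
    """Segment the signal at the beat level (stand-in for the paper's rhythm tracker)."""
    _, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    bounds = librosa.frames_to_samples(beat_frames)
    bounds = np.concatenate(([0], bounds, [len(y)]))
    return [y[s:e] for s, e in zip(bounds[:-1], bounds[1:]) if e > s]


def octave_cepstral_coeffs(segment, sr, n_coeffs=20):
    """Approximate octave-scale cepstral features: log CQT energies -> DCT."""
    if len(segment) < 8192:                         # pad very short beat segments
        segment = np.pad(segment, (0, 8192 - len(segment)))
    C = np.abs(librosa.cqt(segment, sr=sr,
                           fmin=librosa.note_to_hz('C2'),
                           n_bins=72, bins_per_octave=12))
    log_energy = np.log(C.mean(axis=1) + 1e-10)     # average over time within the segment
    return dct(log_energy, type=2, norm='ortho')[:n_coeffs]


class HierarchicalRegionClassifier:
    """Two-stage classifier: vocal presence first, then pure vocals vs. mixed."""

    def __init__(self):
        self.stage1 = SVC()   # pure instrumental vs. contains vocals
        self.stage2 = SVC()   # pure vocals vs. instrumental mixed vocals

    def fit(self, X, y):
        # y: 0 = pure instrumental, 1 = pure vocals, 2 = instrumental mixed vocals
        X, y = np.asarray(X), np.asarray(y)
        self.stage1.fit(X, (y != 0).astype(int))
        vocal = y != 0
        self.stage2.fit(X[vocal], (y[vocal] == 2).astype(int))
        return self

    def predict(self, X):
        X = np.asarray(X)
        out = np.zeros(len(X), dtype=int)
        has_vocals = self.stage1.predict(X).astype(bool)
        if has_vocals.any():
            out[has_vocals] = np.where(self.stage2.predict(X[has_vocals]) == 1, 2, 1)
        return out
```

A typical use would compute one feature vector per beat segment (e.g. `[octave_cepstral_coeffs(seg, sr) for seg in beat_segments(y, sr)]`), train the hierarchical classifier on labelled segments, and then label each beat-level segment of a new song as one of the three region types.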

Citation (APA)

Maddage, N. C., Xu, C., Shenoy, A., & Wang, Y. (2004). Semantic region detection in acoustic music signals. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3332, 874–881. https://doi.org/10.1007/978-3-540-30542-2_108
