Audio songs classification based on music patterns

Abstract

In this work, audio songs are classified based on their music patterns, which helps retrieve music clips matching a listener's taste and supports indexing and accessing clips according to the listener's state. Seven main categories are considered: devotional, energetic, folk, happy, pleasant, sad, and sleepy. Forty music clips per category are used for the training phase and fifteen per category for the testing phase. Vibrato-related features, namely jitter and shimmer, are computed along with the mel-frequency cepstral coefficients (MFCCs); statistical values of pitch (minimum, maximum, mean, and standard deviation) are appended to the MFCCs, jitter, and shimmer, resulting in a 19-dimensional feature vector. A feedforward backpropagation neural network (BPNN) is used as the classifier owing to its efficiency in mapping nonlinear relations. An average accuracy of 82% is achieved over the 105 testing clips.
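The 19-dimensional feature vector described above (13 MFCCs + jitter + shimmer + four pitch statistics) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes 13 MFCCs averaged over frames, and uses the common relative-perturbation definitions of jitter (over pitch periods) and shimmer (over cycle amplitudes); the function names and inputs are hypothetical.

```python
import numpy as np

def jitter(periods):
    """Mean cycle-to-cycle pitch-period perturbation, relative to the mean period."""
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer(amplitudes):
    """Mean cycle-to-cycle amplitude perturbation, relative to the mean amplitude."""
    return np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

def feature_vector(mfcc_frames, pitch_track, periods, amplitudes):
    """Assemble a 19-dimensional vector as in the abstract:
    13 frame-averaged MFCCs + jitter + shimmer + pitch min/max/mean/std."""
    mfcc_mean = mfcc_frames.mean(axis=0)  # (n_frames, 13) -> (13,)
    pitch_stats = [pitch_track.min(), pitch_track.max(),
                   pitch_track.mean(), pitch_track.std()]
    return np.concatenate([mfcc_mean,
                           [jitter(periods), shimmer(amplitudes)],
                           pitch_stats])
```

Such a vector would then be fed, per clip, to a feedforward backpropagation network with a seven-way output layer, one unit per song category.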

Citation (APA)

Sharma, R., Srinivasa Murthy, Y. V., & Koolagudi, S. G. (2016). Audio songs classification based on music patterns. In Advances in Intelligent Systems and Computing (Vol. 381, pp. 157–166). Springer Verlag. https://doi.org/10.1007/978-81-322-2526-3_17
