Classification of recorded musical instruments sounds based on neural networks

Abstract

Neural networks have found profound success in the area of pattern recognition. The purpose of this paper is to automatically classify musical instrument sounds on the basis of a limited number of parameters. This involves two main tasks: feature extraction and the development of a classifier based on the obtained features. For feature extraction, a 5-second audio file stored in WAVE format is passed to a feature-extraction function, which calculates more than 20 numerical features, in both the time domain and the frequency domain, that characterize the sample. For classification, we designed a two-layer Feed-Forward Neural Network (FFNN) trained with the back-propagation algorithm. The FFNN is trained in a supervised manner: the weights are adjusted based on training samples (input-output pairs) that guide the optimization procedure towards an optimum. After training, the neural network is validated by analyzing its response to unknown data in order to evaluate its generalization capability. The sequential forward selection method is then adopted to choose the feature set that achieves the highest classification accuracy. Our goal is to classify each sound into one of five musical instrument families, such as the Strings, the Woodwinds, and the Brass. © 2007 IEEE.
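
The abstract only summarizes the feature-extraction step. The sketch below illustrates a few representative time-domain and frequency-domain descriptors (zero-crossing rate, RMS energy, spectral centroid, spectral spread) computed from a 5-second WAVE file; the specific descriptors, function names, and the use of NumPy/SciPy are assumptions for illustration and do not reproduce the paper's full set of 20+ features.

```python
# Minimal feature-extraction sketch (illustrative, not the paper's exact feature set).
import numpy as np
from scipy.io import wavfile


def extract_features(path):
    rate, samples = wavfile.read(path)        # load PCM samples from a WAVE file
    x = samples.astype(np.float64)
    if x.ndim > 1:                            # mix stereo down to mono if needed
        x = x.mean(axis=1)
    x /= np.max(np.abs(x)) + 1e-12            # amplitude normalization

    # --- time-domain descriptors ---
    zcr = np.mean(np.abs(np.diff(np.sign(x)))) / 2.0   # zero-crossing rate
    rms = np.sqrt(np.mean(x ** 2))                      # RMS energy

    # --- frequency-domain descriptors ---
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / rate)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)   # spectral centroid
    spread = np.sqrt(np.sum(((freqs - centroid) ** 2) * spectrum)
                     / (np.sum(spectrum) + 1e-12))                      # spectral spread

    return np.array([zcr, rms, centroid, spread])
```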
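The paper does not state the network's layer sizes or training hyper-parameters in the abstract. As a hedged illustration only, the sketch below uses scikit-learn's MLPClassifier as a stand-in for the authors' own two-layer FFNN trained by back-propagation; the hidden-layer size, the feature/label file names, and the train/validation split are assumptions.

```python
# Sketch of a supervised two-layer feed-forward classifier with held-out validation.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# X: (n_samples, n_features) matrix of extracted features; y: instrument-family labels.
# "features.npy" and "labels.npy" are hypothetical files used for illustration.
X = np.load("features.npy")
y = np.load("labels.npy")

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_train)            # scale features before training
clf = MLPClassifier(hidden_layer_sizes=(20,),     # one hidden layer of assumed size 20
                    solver="sgd",                 # gradient descent (back-propagation)
                    max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

# Validation on unknown data checks generalization, as described in the abstract.
print("validation accuracy:", clf.score(scaler.transform(X_val), y_val))
```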
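Sequential forward selection itself is a generic greedy procedure: starting from an empty set, the single feature that most improves validation accuracy is added at each step until no addition helps. A minimal sketch follows, assuming a hypothetical evaluate() helper that trains the classifier on a candidate feature subset and returns its validation accuracy.

```python
# Greedy sequential forward selection over feature indices 0..n_features-1.
def sequential_forward_selection(n_features, evaluate):
    selected, best_acc = [], 0.0
    remaining = set(range(n_features))
    while remaining:
        # score every unused feature when added to the current subset
        scores = {f: evaluate(selected + [f]) for f in remaining}
        f_best, acc = max(scores.items(), key=lambda kv: kv[1])
        if acc <= best_acc:          # stop when accuracy no longer improves
            break
        selected.append(f_best)
        remaining.remove(f_best)
        best_acc = acc
    return selected, best_acc
```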

Citation (APA)

Qian, D., & Nian, Z. (2007). Classification of recorded musical instruments sounds based on neural networks. In Proceedings of the 2007 IEEE Symposium on Computational Intelligence in Image and Signal Processing, CIISP 2007 (pp. 157–162). https://doi.org/10.1109/CIISP.2007.369310
