Convolutional neural network based audio event classification

Abstract

This paper proposes an audio event classification method based on convolutional neural networks (CNNs). CNNs have a strong advantage in distinguishing complex shapes in images, and the proposed system exploits this by treating audio features as an input image to a CNN. Mel-scale filter bank features are extracted from each frame and concatenated over 40 consecutive frames; the resulting concatenated block is treated as one input image. The output layer of the CNN generates probabilities for each audio event (e.g., dog bark, siren, forest). The event probabilities for all images in an audio segment are accumulated, and the audio event with the highest accumulated probability is taken as the classification result. The proposed method classified thirty audio events with an accuracy of 81.5% on the UrbanSound8K, BBC Sound FX, DCASE2016, and FREESOUND datasets.
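The segment-level decision described above — stack 40 consecutive mel filter bank frames into one "image", score each image with the CNN, sum the per-image class probabilities, and take the argmax — can be sketched as follows. This is a minimal illustration, not the authors' code: the function names are hypothetical, non-overlapping windows are assumed (the abstract does not specify the hop between images), and the CNN itself is abstracted away as a list of per-image probability vectors.

```python
import numpy as np

def frames_to_images(mel_frames, width=40):
    """Stack consecutive mel filter bank frames into CNN input "images".

    mel_frames: (n_frames, n_mel) array of per-frame features.
    Returns (n_images, width, n_mel). Non-overlapping windows are an
    assumption; the paper only states that 40 frames are concatenated.
    """
    n = mel_frames.shape[0] // width
    return mel_frames[: n * width].reshape(n, width, -1)

def classify_segment(image_probs):
    """Accumulate per-image class probabilities over a segment and
    return the index of the event with the highest accumulated
    probability (hypothetical helper mirroring the abstract)."""
    acc = np.sum(np.asarray(image_probs), axis=0)  # sum over all images
    return int(np.argmax(acc))

# Toy example: 3 images, 4 event classes (probabilities from a CNN).
probs = [
    [0.1, 0.6, 0.2, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.4, 0.3, 0.2, 0.1],
]
print(classify_segment(probs))  # class 1 has the largest summed probability
```

Accumulating probabilities over all images in a segment, rather than voting per image, lets many weakly confident frames outweigh a few confidently wrong ones.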

Citation (APA)

Lim, M., Lee, D., Park, H., Kang, Y., Oh, J., Park, J. S., … Kim, J. H. (2018). Convolutional neural network based audio event classification. KSII Transactions on Internet and Information Systems, 12(6), 2748–2760. https://doi.org/10.3837/tiis.2018.06.017
