Time–Frequency Feature Fusion for Noise Robust Audio Event Classification


Abstract

This paper explores the use of three different two-dimensional time–frequency features for audio event classification with deep neural network back-end classifiers. The evaluations use spectrogram, cochleogram and constant-Q transform-based images to classify 50 classes of audio events under varying levels of acoustic background noise, revealing distinct performance patterns with respect to noise level, feature image type and classifier. Evidence is obtained that two well-performing features, the spectrogram and cochleogram, draw on potentially complementary information. Feature fusion is therefore explored for each pair of features, as well as for all tested features. Results indicate that fusing spectrogram and cochleogram information is particularly beneficial, yielding a 50-class accuracy of over 96% at 0 dB SNR and exceeding 99% at 10 dB SNR and above. Meanwhile, the cochleogram image feature is found to perform well in the extreme noise conditions of −5 dB and −10 dB SNR.
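
To make the pipeline concrete, below is a minimal sketch of extracting the three time–frequency images and performing channel-wise (early) fusion of the spectrogram and cochleogram pair highlighted in the abstract. This is not the authors' implementation: it assumes librosa and scipy (the paper does not specify tooling), approximates the gammatone-based cochleogram with a mel spectrogram since librosa has no gammatone filterbank, and uses illustrative parameter values throughout.

```python
# Sketch only: illustrative parameters, mel spectrogram standing in for the
# paper's gammatone cochleogram. Assumes librosa and scipy are installed.
import numpy as np
import librosa
from scipy.ndimage import zoom


def time_frequency_images(path, sr=16000, n_fft=1024, hop=512, n_bins=64):
    """Compute three 2-D time-frequency images for one audio clip."""
    y, sr = librosa.load(path, sr=sr)

    # Log-magnitude spectrogram from the short-time Fourier transform.
    spec = librosa.amplitude_to_db(
        np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop)))

    # Constant-Q transform image (log-spaced frequency axis).
    cqt = librosa.amplitude_to_db(
        np.abs(librosa.cqt(y, sr=sr, hop_length=hop, n_bins=n_bins)))

    # Mel spectrogram as a stand-in for the gammatone cochleogram.
    coch = librosa.power_to_db(librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop, n_mels=n_bins))

    return spec, cqt, coch


def to_image(x, shape=(64, 96)):
    """Bilinearly rescale a 2-D feature to a fixed size, normalised to [0, 1]."""
    x = zoom(x, (shape[0] / x.shape[0], shape[1] / x.shape[1]), order=1)
    return (x - x.min()) / (x.max() - x.min() + 1e-8)


def fused_input(path):
    """Early fusion: stack two time-frequency images as channels of one input."""
    spec, _, coch = time_frequency_images(path)
    # Spectrogram + cochleogram is the pairing the paper reports as most
    # noise robust; a CNN back-end would consume this H x W x 2 array.
    return np.stack([to_image(spec), to_image(coch)], axis=-1)
```

Whether fusion happens at the input, as sketched here, or deeper in the network is a design choice; the abstract establishes only that combining spectrogram and cochleogram information is what yields the reported accuracy gains.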

Citation (APA)

McLoughlin, I., Xie, Z., Song, Y., Phan, H., & Palaniappan, R. (2020). Time–Frequency Feature Fusion for Noise Robust Audio Event Classification. Circuits, Systems, and Signal Processing, 39(3), 1672–1687. https://doi.org/10.1007/s00034-019-01203-0
