Improving the performance of acoustic event classification by selecting and combining information sources using the fuzzy integral


Abstract

Acoustic events produced in meeting-room-like environments may carry information useful for perceptually aware interfaces. In this paper, we focus on the problem of combining different information sources at different structural levels for classifying human vocal-tract non-speech sounds. The Fuzzy Integral (FI) approach is used to fuse the outputs of several classification systems, and feature selection and ranking are carried out based on the knowledge extracted from the Fuzzy Measure (FM). In experiments with a limited set of training data, the FI-based decision-level fusion achieved classification performance markedly higher than that of the best single classifier, and it can surpass the performance of feature-level integration with Support Vector Machines. Although only the fusion of audio information sources is considered in this work, the conclusions may extend to the multi-modal case. © Springer-Verlag Berlin Heidelberg 2006.
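The abstract describes decision-level fusion of classifier outputs with a fuzzy integral but does not spell out which integral is used or how the fuzzy measure is learned. The sketch below is therefore only a minimal illustration, assuming a Choquet integral over a Sugeno λ-measure built from per-classifier densities; the density values, class names, and scores are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def lambda_from_densities(densities, tol=1e-9):
    """Solve prod(1 + lam * g_i) = 1 + lam for the non-zero root lam > -1
    (the defining condition of the Sugeno lambda-measure)."""
    s = float(sum(densities))
    if abs(s - 1.0) < 1e-6:
        return 0.0                              # densities already sum to 1: additive measure
    f = lambda lam: np.prod([1.0 + lam * g for g in densities]) - (1.0 + lam)
    if s > 1.0:                                 # non-zero root lies in (-1, 0)
        lo, hi = -1.0 + tol, -tol
    else:                                       # non-zero root lies in (0, inf)
        lo, hi = tol, 1.0
        while f(hi) < 0:
            hi *= 2.0
    for _ in range(200):                        # bisection on the sign change
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def sugeno_measure(subset_densities, lam):
    """g(A) built incrementally via g(A U {i}) = g(A) + g_i + lam * g(A) * g_i."""
    g = 0.0
    for d in subset_densities:
        g = g + d + lam * g * d
    return g

def choquet_fuse(scores, densities, lam):
    """Fuse one class's confidence scores from several classifiers with the
    Choquet integral with respect to the Sugeno lambda-measure."""
    order = np.argsort(scores)[::-1]            # classifiers sorted by descending score
    h = np.append(np.asarray(scores, float)[order], 0.0)
    fused = 0.0
    for i in range(len(order)):
        g_top = sugeno_measure([densities[j] for j in order[:i + 1]], lam)
        fused += (h[i] - h[i + 1]) * g_top      # score drop weighted by g of the top-i set
    return fused

# Toy usage: three classifiers scoring one test sound for two classes.
# Densities (per-classifier "worth") and scores are made up for illustration only.
densities = [0.4, 0.3, 0.2]
lam = lambda_from_densities(densities)
scores_per_class = {"cough": [0.7, 0.6, 0.2], "laugh": [0.2, 0.3, 0.8]}
fused = {c: choquet_fuse(s, densities, lam) for c, s in scores_per_class.items()}
print(max(fused, key=fused.get), fused)
```

In a setup like this, the FM values over subsets of classifiers also indicate how much each information source (or feature group) contributes, which is the kind of knowledge the paper reports using for feature selection and ranking.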

Citation (APA)

Temko, A., Macho, D., & Nadeu, C. (2006). Improving the performance of acoustic event classification by selecting and combining information sources using the fuzzy integral. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3869 LNCS, pp. 357–368). https://doi.org/10.1007/11677482_31
