This paper extends the traditional machine-listening methodology for acoustic scene classification towards a new class of multichannel audio signals. It identifies a set of new features extracted from five-channel surround recordings for classifying two basic spatial audio scenes. Moreover, it compares three artificial-intelligence-based approaches to audio scene classification. The results indicate that the method based on the early fusion of features is superior to those involving the late fusion of signal metrics.
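The distinction between early and late fusion can be illustrated with a minimal sketch. The features, toy data, and nearest-centroid classifier below are illustrative assumptions, not the paper's actual feature set or models: early fusion combines all signal metrics into a single feature vector before training one classifier, whereas late fusion trains a classifier per metric subset and merges their decisions afterwards.

```python
# Hedged sketch of early vs. late fusion for two-class scene classification.
# All data and the nearest-centroid classifier are hypothetical stand-ins.

def centroid(vectors):
    """Mean vector of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids):
    """Return the label of the nearest centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, centroids[lbl])))

# Toy feature vectors for two hypothetical spatial scenes, "FB" and "FF".
train = {
    "FB": [[0.9, 0.1, 0.8], [0.8, 0.2, 0.9]],
    "FF": [[0.1, 0.9, 0.2], [0.2, 0.8, 0.1]],
}

# Early fusion: all metrics sit in one vector; a single classifier is trained.
early_centroids = {lbl: centroid(vs) for lbl, vs in train.items()}
sample = [0.85, 0.15, 0.82]
early_label = classify(sample, early_centroids)

# Late fusion: one classifier per metric subset, combined by majority vote.
subsets = [(0, 1), (1, 2)]  # hypothetical split of the metrics
late_votes = []
for idx in subsets:
    cents = {lbl: centroid([[v[i] for i in idx] for v in vs])
             for lbl, vs in train.items()}
    late_votes.append(classify([sample[i] for i in idx], cents))
late_label = max(set(late_votes), key=late_votes.count)

print(early_label, late_label)
```

In this toy example both strategies agree; the paper's finding is that, on real surround recordings, the early-fusion arrangement performs better.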
CITATION STYLE
Zieliński, S. K. (2018). Feature extraction of surround sound recordings for acoustic scene classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10842 LNAI, pp. 475–486). Springer Verlag. https://doi.org/10.1007/978-3-319-91262-2_43