Learning neural audio embeddings for grounding semantics in auditory perception

18 Citations · 25 Readers
Abstract

Multi-modal semantics, which aims to ground semantic representations in perception, has relied on feature norms or raw image data for perceptual input. In this paper we examine grounding semantic representations in raw auditory data, using standard evaluations for multi-modal semantics. Having established the quality of such auditorily grounded representations, we show how they can be applied to tasks where auditory perception is relevant, including two unsupervised categorization experiments, and provide further analysis. We find that features transferred from deep neural networks outperform bag-of-audio-words approaches. To our knowledge, this is the first work to construct multi-modal models from a combination of textual information and auditory information extracted from deep neural networks, and the first work to evaluate the performance of tri-modal (textual, visual and auditory) semantic models.
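
To make the tri-modal setup concrete, the sketch below fuses pre-computed textual, visual and auditory embeddings by L2-normalizing and concatenating them, then compares words by cosine similarity. This is a minimal illustration under assumptions: the embedding dictionaries and dimensionalities are hypothetical placeholders, and concatenation is one common fusion strategy for multi-modal semantic models rather than necessarily the exact method used in the paper.

import numpy as np

def l2_normalize(v):
    # Scale a vector to unit length (no-op for the zero vector).
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def fuse(*embeddings):
    # Fuse per-modality embeddings for one word by normalizing and concatenating.
    # Concatenation of L2-normalized vectors is a common fusion strategy; the
    # paper's models may weight or combine modalities differently.
    return np.concatenate([l2_normalize(e) for e in embeddings])

def cosine(u, v):
    # Cosine similarity between two fused word representations.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical pre-computed embeddings for two words, one dictionary per modality
# (textual, visual, auditory, e.g. auditory vectors taken from a CNN over spectrograms).
rng = np.random.default_rng(0)
words = ("guitar", "piano")
text  = {w: rng.normal(size=300) for w in words}
image = {w: rng.normal(size=128) for w in words}
audio = {w: rng.normal(size=128) for w in words}

tri_modal = {w: fuse(text[w], image[w], audio[w]) for w in words}
print(cosine(tri_modal["guitar"], tri_modal["piano"]))

In practice the fused vectors would be evaluated against standard word-similarity benchmarks or used for the unsupervised categorization tasks mentioned in the abstract, with real embeddings substituted for the random placeholders above.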

Citation (APA)

Kiela, D., & Clark, S. (2017). Learning neural audio embeddings for grounding semantics in auditory perception. Journal of Artificial Intelligence Research, 60, 1003–1030. https://doi.org/10.1613/jair.5665
