Unsupervised cross-modal audio representation learning from unstructured multilingual text


Abstract

We present an approach to unsupervised audio representation learning. Based on a Triplet Neural Network architecture, we harness semantically related cross-modal information to estimate audio track-relatedness. By applying Latent Semantic Indexing (LSI), we embed the corresponding textual information into a latent vector space from which we derive track relatedness for online triplet selection. This LSI topic modeling facilitates fine-grained selection of similar and dissimilar audio-track pairs to learn the audio representation using a Convolutional Recurrent Neural Network (CRNN). In this way, we directly project the semantic context of the unstructured text modality onto the learned representation space of the audio modality without deriving structured ground-truth annotations from it. We evaluate our approach on the Europeana Sounds collection and show how to improve search in digital audio libraries by harnessing the multilingual metadata provided by numerous European digital libraries. We show that our approach is invariant to the variety of annotation styles as well as to the different languages of this collection. The learned representations perform comparably to the baseline of handcrafted features, and even exceed this baseline in similarity retrieval precision at higher cut-offs, with only 15% of the baseline's feature-vector length.

Citation (APA)

Schindler, A., Gordea, S., & Knees, P. (2020). Unsupervised cross-modal audio representation learning from unstructured multilingual text. In Proceedings of the ACM Symposium on Applied Computing (pp. 706–713). Association for Computing Machinery. https://doi.org/10.1145/3341105.3374114
