One perceptron to rule them all: Language, vision, audio and speech


Abstract

Deep neural networks have boosted the convergence of multimedia data analytics into a unified framework shared by practitioners in natural language, vision and speech. Image captioning, lip reading and video sonorization are some of the first applications of a new and exciting field of research exploiting the generalization properties of deep neural representations. This tutorial will first review the basic neural architectures used to encode and decode vision, text and audio, and then review those models that have successfully translated information across modalities.


APA

Giró-i-Nieto, X. (2020). One perceptron to rule them all: Language, vision, audio and speech. In ICMR 2020 - Proceedings of the 2020 International Conference on Multimedia Retrieval (pp. 7–8). Association for Computing Machinery. https://doi.org/10.1145/3372278.3390740
