A paradox of neural encoders and decoders, or why don’t we talk backwards?


Abstract

We present a framework for studying the biases that recurrent neural networks bring to language processing tasks. A semantic concept, represented by a point in Euclidean space, is translated into a symbol sequence by an encoder network. This sequence is then presented to a decoder network, which attempts to translate it back into the original concept. We show how a pair of recurrent networks acting as encoder and decoder can develop their own symbolic language that is serially transmitted between them, either forwards or backwards. The encoder and decoder bring different constraints to the task, and these early results indicate that the conflicting nature of these constraints may be reflected in the language that ultimately emerges, providing clues to the structure of human languages.
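The pipeline the abstract describes can be sketched minimally as follows. This is an assumed illustration, not the authors' implementation: the network sizes, weight initialization, and the argmax symbol-emission rule are all assumptions, and the weights here are random rather than trained jointly as in the paper.

```python
# Sketch (assumed, not the authors' code): an RNN encoder turns a 2-D
# "concept" vector into a discrete symbol sequence; an RNN decoder reads
# that sequence back into a point. Weights are random for illustration;
# the paper trains encoder and decoder jointly.
import numpy as np

rng = np.random.default_rng(0)
N_SYMBOLS, HIDDEN, SEQ_LEN = 4, 8, 5   # assumed sizes

# Encoder: recurrent state driven by the concept; emits one symbol per step.
W_enc_h = rng.normal(0, 0.5, (HIDDEN, HIDDEN))
W_enc_x = rng.normal(0, 0.5, (HIDDEN, 2))
W_emit  = rng.normal(0, 0.5, (N_SYMBOLS, HIDDEN))

# Decoder: recurrent state driven by one-hot symbols; final state -> point.
W_dec_h = rng.normal(0, 0.5, (HIDDEN, HIDDEN))
W_dec_s = rng.normal(0, 0.5, (HIDDEN, N_SYMBOLS))
W_out   = rng.normal(0, 0.5, (2, HIDDEN))

def encode(concept):
    """Map a point in Euclidean space to a symbol sequence."""
    h = np.zeros(HIDDEN)
    symbols = []
    for _ in range(SEQ_LEN):
        h = np.tanh(W_enc_h @ h + W_enc_x @ concept)
        symbols.append(int(np.argmax(W_emit @ h)))  # most active symbol wins
    return symbols

def decode(symbols):
    """Map a symbol sequence back to a point in Euclidean space."""
    h = np.zeros(HIDDEN)
    for s in symbols:
        h = np.tanh(W_dec_h @ h + W_dec_s @ np.eye(N_SYMBOLS)[s])
    return W_out @ h

concept = np.array([0.3, -0.7])
seq = encode(concept)        # serially transmitted symbol sequence
recon = decode(seq)          # decoder's reconstruction of the concept
```

The "forwards or backwards" condition in the paper corresponds to feeding the decoder either `seq` or `seq[::-1]`; with a recurrent decoder, reversal changes which symbols most strongly influence the final state, which is where the asymmetry between the two transmission orders arises.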

Citation (APA)

Tonkes, B., Blair, A., & Wiles, J. (1999). A paradox of neural encoders and decoders, or why don’t we talk backwards? In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1585, pp. 357–364). Springer Verlag. https://doi.org/10.1007/3-540-48873-1_46
