Learning nonadjacent dependencies with a recurrent neural network


Abstract

Human learners are known to exploit statistical dependencies between language elements such as syllables or words during acquisition and processing. Recent research suggests that the underlying computations involve not only adjacent but also nonadjacent elements, as in subject/verb agreement or tense marking in English. The latter type of computation is more difficult and appears to succeed only under certain conditions, as formulated by the variability hypothesis. We model this finding with a simple recurrent network and show that higher variability of the intervening syllable facilitates generalization in a continuous stream of 3-syllable words. We also test the network's performance in the more realistic case of two intervening syllables and show that only a more complex training algorithm leads to satisfactory learning of nonadjacent dependencies. © 2009 Springer Berlin Heidelberg.
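The setup described in the abstract can be illustrated with a minimal sketch: an Elman-style simple recurrent network trained by next-syllable prediction on a continuous stream of 3-syllable aXb "words", where the first and third syllables are dependent and the middle syllable varies. All names, sizes, and learning parameters below are illustrative assumptions, not the paper's actual configuration, and training uses one-step truncated backpropagation rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Vocabulary: two dependency pairs (a1...b1, a2...b2) plus n_middle
# intervening syllables X0..Xn. n_middle is the "variability" knob of
# the variability hypothesis (illustrative value, not from the paper).
n_middle = 12
vocab = ["a1", "a2", "b1", "b2"] + [f"X{i}" for i in range(n_middle)]
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

def make_stream(n_words):
    """Continuous stream of aXb words, with random pair and middle syllable."""
    seq = []
    for _ in range(n_words):
        k = rng.integers(2)            # which dependency pair
        x = rng.integers(n_middle)     # intervening syllable
        seq += [idx[f"a{k + 1}"], idx[f"X{x}"], idx[f"b{k + 1}"]]
    return seq

# Elman SRN: hidden layer fed by the input and a copy of its own
# previous state; output is a softmax over the next syllable.
H = 20
Wxh = rng.normal(0, 0.1, (H, V))
Whh = rng.normal(0, 0.1, (H, H))
Why = rng.normal(0, 0.1, (V, H))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_epoch(stream, lr=0.1):
    """One pass of next-syllable prediction with one-step truncated backprop.

    Returns mean cross-entropy loss over the stream.
    """
    global Wxh, Whh, Why
    h = np.zeros(H)
    total = 0.0
    for t in range(len(stream) - 1):
        x = np.zeros(V)
        x[stream[t]] = 1.0
        h_prev = h
        h = np.tanh(Wxh @ x + Whh @ h_prev)     # context = previous hidden state
        p = softmax(Why @ h)
        target = stream[t + 1]
        total -= np.log(p[target])
        # Gradients for softmax + cross-entropy, truncated after one step.
        dy = p.copy()
        dy[target] -= 1.0
        dh = (Why.T @ dy) * (1.0 - h ** 2)
        Why -= lr * np.outer(dy, h)
        Wxh -= lr * np.outer(dh, x)
        Whh -= lr * np.outer(dh, h_prev)
    return total / (len(stream) - 1)

losses = [train_epoch(make_stream(1000)) for _ in range(5)]
print([round(l, 3) for l in losses])
```

With a single intervening syllable the forward recurrence can carry the identity of the initial a across the gap, so the prediction loss falls with training; the abstract's two-gap case is exactly where this simple regime starts to fail and a more complex training algorithm becomes necessary.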

Citation (APA)

Farkaš, I. (2009). Learning nonadjacent dependencies with a recurrent neural network. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5507 LNCS, pp. 292–299). https://doi.org/10.1007/978-3-642-03040-6_36
