Learning representations of wordforms with recurrent networks: Comment on Sibley, Kello, Plaut, & Elman (2008)

Abstract

Sibley et al. (2008) report a recurrent neural network model designed to learn wordform representations suitable for written and spoken word identification. The authors claim that their sequence encoder network overcomes a key limitation associated with models that code letters by position (e.g., CAT might be coded as C-in-position-1, A-in-position-2, T-in-position-3). The problem with coding letters by position (slot-coding) is that it is difficult to generalize knowledge across positions; for example, the overlap between CAT and TOMCAT is lost. Although we agree this is a critical problem with many slot-coding schemes, we question whether the sequence encoder model addresses this limitation, and we highlight another deficiency of the model. We conclude that alternative theories are more promising. Copyright © 2009 Cognitive Science Society, Inc. All rights reserved.
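
To make the slot-coding limitation concrete, here is a minimal Python sketch (our illustration, not code from Sibley et al. or from this comment) of a letter-by-position scheme. Because the letters of CAT occupy positions 4 through 6 of TOMCAT, the two words share no letter-in-position units at all.

def slot_code(word):
    """Code a word as a set of letter-in-position units,
    e.g. CAT -> {('C', 1), ('A', 2), ('T', 3)}."""
    return {(letter, pos) for pos, letter in enumerate(word, start=1)}

def overlap(word1, word2):
    """Count the letter-in-position units shared by two slot-coded words."""
    return len(slot_code(word1) & slot_code(word2))

if __name__ == "__main__":
    # CAT's letters fall in positions 4-6 of TOMCAT, so no unit matches.
    print(overlap("CAT", "TOMCAT"))  # 0
    # By contrast, CAT and CAP share C-in-1 and A-in-2.
    print(overlap("CAT", "CAP"))     # 2

Under this scheme the coded overlap between CAT and TOMCAT is zero, even though the words share an entire morpheme; this is the generalization problem the abstract refers to.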

Citation (APA)

Bowers, J. S., & Davis, C. J. (2009). Learning representations of wordforms with recurrent networks: Comment on Sibley, Kello, Plaut, & Elman (2008). Cognitive Science, 33(7), 1183–1186. https://doi.org/10.1111/j.1551-6709.2009.01062.x
