An entropy model for artificial grammar learning

22 citations · 61 Mendeley readers

Abstract

A model is proposed to characterize the type of knowledge acquired in artificial grammar learning (AGL). In particular, Shannon entropy is employed to compute the complexity of different test items in an AGL task, relative to the training items. According to this model, the more predictable a test item is from the training items, the more likely it is that this item should be selected as compatible with the training items. The predictions of the entropy model are explored in relation to the results from several previous AGL datasets and compared to other AGL measures. This particular approach in AGL resonates well with similar models in categorization and reasoning, which also postulate that cognitive processing is geared towards the reduction of entropy. © 2010 Pothos.
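The core idea — that a test item is endorsed when it is predictable from the training items — can be illustrated with a small sketch. This is not the paper's actual model; it is a hypothetical stand-in that estimates a smoothed bigram distribution from training strings and scores a test item by its average per-symbol surprisal (in bits), so that lower surprisal means higher predictability. The training strings and the smoothing scheme are invented for illustration.

```python
from collections import Counter
import math

def bigram_model(strings):
    # Count bigrams over training strings, with "^"/"$" as
    # start/end-of-string markers.
    counts, context = Counter(), Counter()
    for s in strings:
        s = "^" + s + "$"
        for a, b in zip(s, s[1:]):
            counts[(a, b)] += 1
            context[a] += 1
    return counts, context

def surprisal(item, counts, context, alpha=1.0):
    # Average per-symbol surprisal (bits) of a test item under an
    # add-alpha smoothed bigram model; lower = more predictable from
    # the training items.
    vocab = {b for (_, b) in counts} | {"$"}
    s = "^" + item + "$"
    total = 0.0
    for a, b in zip(s, s[1:]):
        p = (counts[(a, b)] + alpha) / (context[a] + alpha * len(vocab))
        total += -math.log2(p)
    return total / (len(s) - 1)

# Hypothetical training strings from a toy grammar.
training = ["MVR", "MVVR", "MTR"]
counts, context = bigram_model(training)

# An item built from familiar bigrams is less surprising than a
# scrambled one, so it would be more likely endorsed as grammatical.
print(surprisal("MVR", counts, context) < surprisal("RVM", counts, context))
# → True
```

On this view, endorsement rates in an AGL test should fall as an item's surprisal relative to the training set rises; the entropy model makes this notion of predictability precise.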

Citation (APA)
Pothos, E. M. (2010). An entropy model for artificial grammar learning. Frontiers in Psychology, (JUN). https://doi.org/10.3389/fpsyg.2010.00016
