An incremental Bayesian model for learning syntactic categories


Abstract

We present an incremental Bayesian model for the unsupervised learning of syntactic categories from raw text. The model draws information from the distributional cues of words within an utterance, while explicitly bootstrapping its development on its own partially-learned knowledge of syntactic categories. Testing our model on actual child-directed data, we demonstrate that it is robust to noise, learns reasonable categories, manages lexical ambiguity, and in general shows learning behaviours similar to those observed in children.
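To illustrate the general idea of incremental Bayesian category learning from distributional cues, here is a toy sketch. It is not the authors' actual model; it is a hypothetical simplification in the spirit of incremental Bayesian clustering: each word token is assigned to the category with the highest posterior given its context words, with some probability mass reserved for opening a new category, and counts are updated online after each assignment. The fixed vocabulary size (100), the smoothing constant, and the new-category probability are all illustrative assumptions.

```python
from collections import defaultdict

class IncrementalCategorizer:
    """Toy incremental Bayesian categorizer over word contexts.

    Hypothetical simplification of the kind of model described in the
    abstract: tokens are clustered online by their context words, with
    add-alpha smoothing and a fixed probability of opening a new category.
    Not the authors' model; an illustrative sketch only.
    """

    VOCAB_SIZE = 100  # assumed fixed vocabulary size for smoothing

    def __init__(self, new_cat_prob=0.5, alpha=1.0):
        self.new_cat_prob = new_cat_prob  # prior mass for a new category
        self.alpha = alpha                # add-alpha smoothing constant
        self.cat_sizes = []               # tokens assigned to each category
        self.context_counts = []          # per-category context-word counts
        self.total = 0                    # total tokens seen so far

    def _likelihood(self, cat, context):
        """P(context | category) under add-alpha smoothed unigram counts."""
        counts = self.context_counts[cat]
        size = self.cat_sizes[cat]
        p = 1.0
        for w in context:
            p *= (counts[w] + self.alpha) / (size + self.alpha * self.VOCAB_SIZE)
        return p

    def observe(self, context):
        """Assign a token with this context tuple to a category; return its id."""
        scores = []
        for cat in range(len(self.cat_sizes)):
            prior = (1 - self.new_cat_prob) * self.cat_sizes[cat] / max(self.total, 1)
            scores.append(prior * self._likelihood(cat, context))
        # Score for opening a fresh, empty category (uniform likelihood).
        scores.append(self.new_cat_prob * (1.0 / self.VOCAB_SIZE) ** len(context))
        best = max(range(len(scores)), key=scores.__getitem__)
        if best == len(self.cat_sizes):          # open a new category
            self.cat_sizes.append(0)
            self.context_counts.append(defaultdict(int))
        self.cat_sizes[best] += 1                # incremental count updates
        for w in context:
            self.context_counts[best][w] += 1
        self.total += 1
        return best
```

With these settings, tokens that recur in the same context are grouped together, while a token in an entirely unseen context opens a new category, loosely mirroring the incremental, self-bootstrapping behaviour the abstract describes.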

Citation (APA)

Parisien, C., Fazly, A., & Stevenson, S. (2008). An incremental Bayesian model for learning syntactic categories. In CoNLL 2008 - Proceedings of the Twelfth Conference on Computational Natural Language Learning (pp. 89–96). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1596324.1596340
