Automatically inducing the syntactic part-of-speech categories for words in text is a fundamental task in Computational Linguistics. While the performance of unsupervised tagging models has been slowly improving, current state-of-the-art systems make the obviously incorrect assumption that all tokens of a given word type must share a single part-of-speech tag. This one-tag-per-type heuristic counters the tendency of Hidden Markov Model based taggers to over-generate tags for a given word type, but it is clearly incompatible with basic syntactic theory. In this paper we extend a state-of-the-art Pitman-Yor Hidden Markov Model tagger with an explicit model of the lexicon. In doing so we are able to incorporate a soft bias towards inducing few tags per type. We develop a particle filter for drawing samples from the posterior of our model, and present empirical results showing that our model is competitive with, and faster than, the state-of-the-art without making any unrealistic restrictions. © 2014 Association for Computational Linguistics.
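To illustrate the kind of sequential posterior sampling the abstract refers to, here is a minimal sketch of a particle filter (sequential importance resampling) over tag sequences for a tiny discrete HMM. The tag set and the transition and emission tables below are invented toy values for the demo, not the paper's model: the paper's tagger is a Pitman-Yor HMM with an explicit lexicon model, which this sketch does not attempt to reproduce.

```python
import random
from collections import Counter

TAGS = ["DET", "NOUN", "VERB"]

# Toy transition probabilities P(tag_t | tag_{t-1}); "<s>" marks the start.
# These numbers are illustrative only.
TRANS = {
    "<s>":  {"DET": 0.7, "NOUN": 0.2, "VERB": 0.1},
    "DET":  {"DET": 0.05, "NOUN": 0.85, "VERB": 0.1},
    "NOUN": {"DET": 0.1, "NOUN": 0.2, "VERB": 0.7},
    "VERB": {"DET": 0.5, "NOUN": 0.4, "VERB": 0.1},
}

# Toy emission probabilities P(word | tag).
EMIT = {
    "DET":  {"the": 0.9, "dog": 0.05, "runs": 0.05},
    "NOUN": {"the": 0.05, "dog": 0.8, "runs": 0.15},
    "VERB": {"the": 0.05, "dog": 0.05, "runs": 0.9},
}

def particle_filter(words, n_particles=500, seed=0):
    """Draw approximate posterior samples of tag sequences for `words`
    by sequential importance resampling: propose each next tag from the
    transition distribution, weight by the emission likelihood, and
    resample particles in proportion to their weights."""
    rng = random.Random(seed)
    particles = [["<s>"] for _ in range(n_particles)]
    for word in words:
        seqs, weights = [], []
        for p in particles:
            prev = p[-1]
            tags, probs = zip(*TRANS[prev].items())
            tag = rng.choices(tags, weights=probs)[0]
            seqs.append(p + [tag])
            weights.append(EMIT[tag].get(word, 1e-6))
        # Resample proportionally to the emission weights.
        particles = rng.choices(seqs, weights=weights, k=n_particles)
    return [p[1:] for p in particles]  # drop the "<s>" start symbol

if __name__ == "__main__":
    samples = particle_filter(["the", "dog", "runs"])
    best = Counter(tuple(s) for s in samples).most_common(1)[0][0]
    print(best)
```

With these strongly peaked toy distributions, the bulk of the particles recover the tagging DET NOUN VERB for "the dog runs". A real induction system would, of course, learn the transition and emission distributions rather than fix them by hand.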
Dubbin, G., & Blunsom, P. (2014). Modelling the lexicon in unsupervised part of speech induction. In 14th Conference of the European Chapter of the Association for Computational Linguistics 2014, EACL 2014 (pp. 116–125). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/e14-1013