Weakly supervised supertagging with grammar-informed initialization

Citations: 12 · Mendeley readers: 84

Abstract

Much previous work has investigated weak supervision with HMMs and tag dictionaries for part-of-speech tagging, but there have been no similar investigations for the harder problem of supertagging. Here, I show that weak supervision for supertagging does work, but that it is subject to severe performance degradation when the tag dictionary is highly ambiguous. I show that lexical category complexity and information about how supertags may combine syntactically can be used to initialize the transition distributions of a first-order Hidden Markov Model for weakly supervised learning. This initialization proves more effective than starting with uniform transitions, especially when the tag dictionary is highly ambiguous.

© 2008. Licensed under the Creative Commons.
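The abstract describes biasing the HMM's initial transition distributions toward supertag pairs that can combine syntactically and toward less complex categories, rather than starting from uniform transitions. The sketch below illustrates one way such an initialization could look; the toy category encoding, the helper names (complexity, can_combine), and the weighting scheme are assumptions for illustration, not the paper's actual implementation.

# Illustrative sketch (not the paper's code): grammar-informed initialization
# of HMM transition distributions for CCG supertagging. The category encoding,
# helper names, and weighting scheme here are assumptions for illustration.

def complexity(cat):
    """Rough size of a CCG category string: count its atomic parts,
    e.g. 'NP' -> 1, 'S\\NP' -> 2, '(S\\NP)/NP' -> 3."""
    return 1 + cat.count("/") + cat.count("\\")

def can_combine(left, right):
    """Crude combinability test using only function application:
    forward (left = X/Y, right = Y) or backward (left = Y, right = X\\Y)."""
    if "/" in left and left.rsplit("/", 1)[1].strip("()") == right:
        return True
    if "\\" in right and right.rsplit("\\", 1)[1].strip("()") == left:
        return True
    return False

def init_transitions(tags, combine_weight=2.0):
    """Build p(next | prev) favoring combinable tag pairs and simple
    categories, instead of a uniform initialization."""
    trans = {}
    for prev in tags:
        weights = {}
        for nxt in tags:
            w = 1.0 / complexity(nxt)        # prefer less complex categories
            if can_combine(prev, nxt):
                w *= combine_weight          # boost syntactically combinable pairs
            weights[nxt] = w
        total = sum(weights.values())
        trans[prev] = {t: w / total for t, w in weights.items()}
    return trans

if __name__ == "__main__":
    tags = ["NP", "S\\NP", "(S\\NP)/NP", "NP/NP"]
    T = init_transitions(tags)
    for nxt, p in sorted(T["NP"].items(), key=lambda kv: -kv[1]):
        print("p(%s | NP) = %.3f" % (nxt, p))

Because each row is normalized, the result is a proper set of transition distributions that weakly supervised EM training over the tag-dictionary-constrained data can then refine.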

Citation (APA)

Baldridge, J. (2008). Weakly supervised supertagging with grammar-informed initialization. In Coling 2008 - 22nd International Conference on Computational Linguistics, Proceedings of the Conference (Vol. 1, pp. 57–64). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1599081.1599089
