Weakly-supervised Bayesian learning of a CCG supertagger

4 Citations · 84 Mendeley Readers

Abstract

We present a Bayesian formulation for weakly-supervised learning of a Combinatory Categorial Grammar (CCG) supertagger with an HMM. We assume supervision in the form of a tag dictionary, and our prior encourages the use of cross-linguistically common category structures as well as transitions between tags that can combine locally according to CCG’s combinators. Our prior is theoretically appealing since it is motivated by language-independent, universal properties of the CCG formalism. Empirically, we show that it yields substantial improvements over previous work that used similar biases to initialize an EM-based learner. Additional gains are obtained by further shaping the prior with corpus-specific information that is extracted automatically from raw text and a tag dictionary.
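To make the two biases in the abstract concrete, the sketch below is a hypothetical illustration, not the paper's actual model: `cat_prior` assigns higher probability to structurally simpler CCG categories (the "cross-linguistically common category structures" bias), and `combines` tests whether two adjacent supertags can reduce locally, restricted here to forward/backward application even though CCG also has composition and type-raising combinators. All constants and the atom inventory are assumptions made for the example, not the paper's hyperparameters.

```python
# A minimal sketch (NOT the paper's model) of the two prior biases:
# (1) a recursive prior over CCG categories favoring simple structures,
# (2) a local-combinability check on adjacent supertags.

P_ATOMIC = 0.7    # assumed: probability a category is atomic
P_FORWARD = 0.5   # assumed: probability a complex category uses "/"
ATOMS = {"S": 0.4, "NP": 0.4, "N": 0.15, "PP": 0.05}  # assumed atom distribution

def cat_prior(cat):
    r"""Probability of a category under the sketch prior.

    A category is an atom (str) or a triple (slash, result, argument);
    e.g. S\NP is ("\\", "S", "NP"). Each level of nesting multiplies in
    a (1 - P_ATOMIC) factor, so simpler categories score higher.
    """
    if isinstance(cat, str):
        return P_ATOMIC * ATOMS[cat]
    slash, result, argument = cat
    p_slash = P_FORWARD if slash == "/" else 1.0 - P_FORWARD
    return (1.0 - P_ATOMIC) * p_slash * cat_prior(result) * cat_prior(argument)

def combines(left, right):
    """True if left and right combine by forward or backward application."""
    fwd = isinstance(left, tuple) and left[0] == "/" and left[2] == right
    bwd = isinstance(right, tuple) and right[0] == "\\" and right[2] == left
    return fwd or bwd

intrans = ("\\", "S", "NP")       # S\NP (intransitive verb)
trans = ("/", intrans, "NP")      # (S\NP)/NP: deeper structure, lower prior
print(cat_prior("NP"), cat_prior(intrans), cat_prior(trans))
print(combines("NP", intrans))    # True: NP  S\NP -> S (backward application)
```

In the model the abstract describes, biases like these would shape the priors over an HMM's tag transitions and the tag dictionary's category inventory; the sketch shows them in isolation.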

Cite

APA

Garrette, D., Dyer, C., Baldridge, J., & Smith, N. A. (2014). Weakly-supervised Bayesian learning of a CCG supertagger. In Proceedings of the 18th Conference on Computational Natural Language Learning (CoNLL 2014) (pp. 141–150). Association for Computational Linguistics. https://doi.org/10.3115/v1/w14-1615
