Abstract
This paper shows that discriminative reranking with an averaged perceptron model yields substantial improvements in realization quality with CCG. The paper confirms the utility of including language model log probabilities as features in the model, which prior work on discriminative training with log-linear models for HPSG realization had called into question. The perceptron model allows the combination of multiple n-gram models to be optimized and then augmented with both syntactic features and discriminative n-gram features. The full model yields a state-of-the-art BLEU score of 0.8506 on Section 23 of the CCGbank, to our knowledge the best score reported to date using a reversible, corpus-engineered grammar.
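To make the approach concrete, below is a minimal sketch of averaged-perceptron reranking over n-best realization candidates, in the spirit of the model the abstract describes. It assumes each candidate is represented as a sparse dict of feature values (e.g., n-gram model log probabilities and syntactic feature counts) and that the oracle is the highest-BLEU candidate in each n-best list; all names are illustrative and not taken from the authors' implementation.

```python
# Sketch of averaged-perceptron reranking (assumptions noted above;
# names are hypothetical, not the authors' code).
from collections import defaultdict

def score(weights, feats):
    """Dot product of a sparse feature dict with the weight vector."""
    return sum(weights[f] * v for f, v in feats.items())

def train_averaged_perceptron(nbest_lists, oracle_indices, epochs=10):
    """Return averaged weights learned from (n-best list, oracle) pairs."""
    weights = defaultdict(float)  # current weight vector
    totals = defaultdict(float)   # running sum of weights for averaging
    n_steps = 0
    for _ in range(epochs):
        for nbest, oracle in zip(nbest_lists, oracle_indices):
            # Candidate the current model would select.
            pred = max(range(len(nbest)),
                       key=lambda i: score(weights, nbest[i]))
            if pred != oracle:
                # Standard perceptron update: promote the oracle's
                # features, demote the mistaken prediction's.
                for f, v in nbest[oracle].items():
                    weights[f] += v
                for f, v in nbest[pred].items():
                    weights[f] -= v
            # Accumulate the weight vector at every step (simple,
            # unoptimized form of averaging).
            n_steps += 1
            for f, v in weights.items():
                totals[f] += v
    return {f: v / n_steps for f, v in totals.items()}
```

The averaging step follows Collins's (2002) averaged perceptron, which damps the influence of individual noisy updates; in practice the running-sum shown here is often replaced by a lazier bookkeeping trick for efficiency.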
Citation
White, M., & Rajkumar, R. (2009). Perceptron reranking for CCG realization. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP 2009), pp. 410–419. Association for Computational Linguistics. https://doi.org/10.3115/1699510.1699564