Abstract
We present a version of Inversion Transduction Grammar in which rule probabilities are lexicalized throughout the synchronous parse tree, along with pruning techniques for efficient training. Alignment results improve over unlexicalized ITG on short sentences, for which full EM is feasible, but pruning seems to have a negative impact on longer sentences. © 2005 Association for Computational Linguistics.
Citation
Zhang, H., & Gildea, D. (2005). Stochastic lexicalized Inversion Transduction Grammar for Alignment. In ACL-05 - 43rd Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (pp. 475–482). https://doi.org/10.3115/1219840.1219899