Discriminative training and maximum entropy models for statistical machine translation

791 citations · 336 Mendeley readers

Abstract

We present a framework for statistical machine translation of natural languages based on direct maximum entropy models, which contains the widely used source-channel approach as a special case. All knowledge sources are treated as feature functions, which depend on the source language sentence, the target language sentence and possible hidden variables. This approach allows a baseline machine translation system to be extended easily by adding new feature functions. We show that a baseline statistical machine translation system is significantly improved using this approach.
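The direct maximum entropy framework described here amounts to a log-linear model: the translation is chosen as the target sentence maximizing a weighted sum of feature function scores, ê = argmax_e Σ_m λ_m h_m(f, e). The sketch below is purely illustrative and is not the paper's implementation; the feature functions, weights, and candidate translations are made up to show how knowledge sources combine as weighted features.

```python
# Illustrative sketch of a log-linear (direct maximum entropy) translation model.
# Each knowledge source is a feature function h_m(e, f); weights lambda_m scale
# their contributions. All names and values below are hypothetical.

def log_linear_score(features, weights, e, f):
    """Return sum_m lambda_m * h_m(e, f), the unnormalized log-linear score."""
    return sum(weights[name] * h(e, f) for name, h in features.items())

def best_translation(candidates, features, weights, f):
    """Pick the candidate e maximizing the log-linear score (the argmax decision rule)."""
    return max(candidates, key=lambda e: log_linear_score(features, weights, e, f))

# Hypothetical feature functions: a crude "translation model" score and a
# length-based "language model" score, both acting like log-probabilities.
features = {
    "log_p_translation": lambda e, f: -2.0 if len(e.split()) == len(f.split()) else -5.0,
    "log_p_language":    lambda e, f: -1.0 * len(e.split()),
}
weights = {"log_p_translation": 1.0, "log_p_language": 0.5}

source_sentence = "das ist ein haus"
candidates = ["this is a house", "that is a building here"]
print(best_translation(candidates, features, weights, source_sentence))
```

In the paper, the weights λ_m are trained discriminatively rather than fixed by hand as in this toy example; adding a new knowledge source simply means adding another feature function to the model.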

Citation (APA)
Och, F. J., & Ney, H. (2002). Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 2002-July, pp. 295–302). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1073083.1073133
