A word-to-word model of translational equivalence


Abstract

Many multilingual NLP applications need to translate words between different languages, but cannot afford the computational expense of inducing or applying a full translation model. For these applications, we have designed a fast algorithm for estimating a partial translation model, which accounts for translational equivalence only at the word level. The model's precision/recall trade-off can be directly controlled via one threshold parameter. This feature makes the model more suitable for applications that are not fully statistical. The model's hidden parameters can be easily conditioned on information extrinsic to the model, providing an easy way to integrate pre-existing knowledge such as part-of-speech tags, dictionaries, word order, etc. Our model can link word tokens in parallel texts as accurately as other translation models in the literature. Unlike other translation models, it can automatically produce dictionary-sized translation lexicons, and it can do so with over 99% accuracy.
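The abstract describes the approach only at a high level. As a rough illustration of the kind of procedure it refers to, the sketch below induces a word-to-word lexicon from sentence-aligned bitext: candidate word pairs are scored by an association measure, tokens are linked greedily one-to-one within each sentence pair, and a single score threshold controls the precision/recall trade-off. The Dice-coefficient scoring, the greedy linking loop, and all function names are illustrative assumptions, not the paper's actual scoring function or estimation method.

```python
from collections import Counter
from itertools import product


def association_scores(bitext):
    """Score source/target word pairs by how often they co-occur in
    aligned sentence pairs.  The Dice coefficient used here is only a
    stand-in for the paper's statistical score (an assumption)."""
    src_counts, trg_counts, pair_counts = Counter(), Counter(), Counter()
    for src_sent, trg_sent in bitext:
        src_types, trg_types = set(src_sent), set(trg_sent)
        src_counts.update(src_types)
        trg_counts.update(trg_types)
        pair_counts.update(product(src_types, trg_types))
    return {
        (s, t): 2.0 * c / (src_counts[s] + trg_counts[t])
        for (s, t), c in pair_counts.items()
    }


def link_tokens(src_sent, trg_sent, scores, threshold):
    """Greedy one-to-one linking: repeatedly take the best-scoring
    still-unlinked token pair whose score clears the threshold."""
    candidates = sorted(
        ((scores.get((s, t), 0.0), i, j)
         for i, s in enumerate(src_sent)
         for j, t in enumerate(trg_sent)),
        reverse=True,
    )
    linked_src, linked_trg, links = set(), set(), []
    for score, i, j in candidates:
        if score < threshold:
            break
        if i in linked_src or j in linked_trg:
            continue
        linked_src.add(i)
        linked_trg.add(j)
        links.append((src_sent[i], trg_sent[j], score))
    return links


def induce_lexicon(bitext, threshold):
    """Collect the word pairs linked anywhere in the bitext into a
    translation lexicon with link counts."""
    scores = association_scores(bitext)
    lexicon = Counter()
    for src_sent, trg_sent in bitext:
        for s, t, _ in link_tokens(src_sent, trg_sent, scores, threshold):
            lexicon[(s, t)] += 1
    return lexicon


if __name__ == "__main__":
    # Toy sentence-aligned bitext (invented for illustration).
    bitext = [
        ("the house is red".split(), "la maison est rouge".split()),
        ("the house is small".split(), "la maison est petite".split()),
        ("the dog is small".split(), "le chien est petit".split()),
    ]
    for pair, count in induce_lexicon(bitext, threshold=0.5).most_common():
        print(pair, count)
```

In this sketch the single `threshold` parameter plays the role described in the abstract: raising it keeps only high-confidence links (higher precision, lower recall), while lowering it admits more word pairs into the lexicon.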

Cite

APA

Melamed, I. D. (1997). A word-to-word model of translational equivalence. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1997-July, pp. 490–497). Association for Computational Linguistics (ACL). https://doi.org/10.7551/mitpress/2708.003.0011
