Context-dependent SMT model using bilingual verb-noun collocation

Citations: 2
Readers (Mendeley): 85

Abstract

In this paper, we propose a new context-dependent SMT model that is tightly coupled with a language model. It is designed to decrease translation ambiguity and to search efficiently for an optimal hypothesis by reducing the hypothesis search space. It works through reciprocal incorporation of source and target context: a source word is determined by the previous and corresponding target words, and the next target word is predicted by the pair consisting of the previous target word and its corresponding source word. To alleviate data sparseness in chunk-based translation, we adopt a stepwise back-off translation strategy. Moreover, to obtain more semantically plausible translation results, we use bilingual verb-noun collocations, which are automatically extracted using chunk alignment and a monolingual dependency parser. As a case study, we experimented on the Japanese-Korean language pair. As a result, we not only reduced the search space but also improved translation performance. © 2005 Association for Computational Linguistics.
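As a rough illustration only (a reconstruction from the abstract, not the paper's exact equations), the reciprocal incorporation described above can be read as two coupled conditional distributions, where f_i denotes the i-th source word (or chunk) and e_i its corresponding target word:

    P(f_i | e_{i-1}, e_i)      source word conditioned on the previous and corresponding target words
    P(e_{i+1} | e_i, f_i)      next target word predicted from the previous target word and its corresponding source word

Under this reading, the score of a translation hypothesis would multiply such terms over positions i, so the translation model and the language-model-like prediction of the next target word are evaluated jointly during decoding; the exact factorization and normalization used by the authors should be taken from the paper itself.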

Citation (APA)

Hwang, Y. S., & Sasaki, Y. (2005). Context-dependent SMT model using bilingual verb-noun collocation. In ACL-05 - 43rd Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (pp. 549–556). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1219840.1219908
