Inference of stochastic finite-state transducers using N-gram mixtures

Abstract

Statistical pattern recognition has proved to be an interesting framework for machine translation, and stochastic finite-state transducers are adequate models in many language processing areas such as speech translation, computer-assisted translation, etc. The well-known n-gram language models are widely used in this framework for machine translation. One application of these n-gram models is the inference of stochastic finite-state transducers. However, n-grams can model only simple dependencies, whereas many translations require strong context and style dependencies to be taken into account. Mixtures of parametric models increase the descriptive power of statistical models by modelling subclasses of objects. In this work, we propose the use of n-gram mixtures in GIATI, a procedure for inferring stochastic finite-state transducers. N-gram mixtures are expected to model topics or writing styles. We present experimental results showing that translation performance can be improved if enough training data is available. © Springer-Verlag Berlin Heidelberg 2007.
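
The abstract does not reproduce the model equations, but the underlying idea can be sketched as a standard mixture of n-gram models. In the following, the number of components T, the mixture weights \pi_t and the extended symbols z_k (source-target pairs, as used in GIATI) are illustrative notation, not taken from the paper:

\[
P(z_1 \dots z_K) = \sum_{t=1}^{T} \pi_t \prod_{k=1}^{K} P_t\!\left(z_k \mid z_{k-n+1}^{\,k-1}\right),
\qquad \sum_{t=1}^{T} \pi_t = 1,
\]

where each component P_t is an ordinary n-gram model intended to capture one topic or writing style. Under this reading, the mixture weights and component distributions would typically be estimated with an EM-style procedure, and each component could then be converted into a stochastic finite-state transducer following the GIATI construction.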

Cite

CITATION STYLE

APA

Alabau, V., Casacuberta, F., Vidal, E., & Juan, A. (2007). Inference of stochastic finite-state transducers using N-gram mixtures. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4478 LNCS, pp. 282–289). Springer Verlag. https://doi.org/10.1007/978-3-540-72849-8_36
