Speech-input multi-target machine translation


Abstract

In order to translate speech into multiple languages simultaneously, an extension of stochastic finite-state transducers is proposed. In this approach, the speech translation model consists of a single network in which the acoustic models (on the input side) and the multilingual model (on the output side) are embedded. The multi-target model has been evaluated in a practical task, and the results have been compared with those obtained using several mono-target models. Experimental results show that the multi-target model requires less memory. In addition, a single decoding pass suffices to obtain the speech translated into all target languages.
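To make the idea concrete, here is a minimal sketch (not the authors' implementation) of a multi-target finite-state transducer in which each transition emits a tuple of output words, one per target language, so a single left-to-right decoding produces every translation at once. All states, words, target languages, and probabilities are invented for illustration.

```python
# Toy multi-target stochastic finite-state transducer.
# Each transition emits a tuple of outputs, one per target language,
# so one decoding pass yields all translations simultaneously.
# The lexicon, states, and probabilities below are purely illustrative.

from math import prod

# transitions[(state, input_word)] = (next_state, (out_es, out_fr), prob)
# Assumed target-tuple order: (Spanish, French).
TRANSITIONS = {
    (0, "good"):    (1, ("buenos", "bon"), 0.9),
    (1, "morning"): (2, ("días", "jour"), 0.8),
}
FINAL_STATES = {2}

def decode(words):
    """Single left-to-right pass that yields one sentence per target language."""
    state, outputs, probs = 0, [], []
    for w in words:
        state, emitted, p = TRANSITIONS[(state, w)]
        outputs.append(emitted)
        probs.append(p)
    assert state in FINAL_STATES, "input not accepted by the transducer"
    # Regroup the per-transition tuples into one sentence per language.
    per_language = [" ".join(tokens) for tokens in zip(*outputs)]
    return per_language, prod(probs)

translations, prob = decode(["good", "morning"])
print(translations)       # ['buenos días', 'bon jour']
print(round(prob, 2))     # 0.72
```

A real speech-input system would embed acoustic models in place of the symbolic input words, but the key property is the same: the multilingual output is attached to a single shared network, so only one search is needed.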

Citation (APA)

Pérez, A., Torres, M. I., González, M. T., & Casacuberta, F. (2007). Speech-input multi-target machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 56–63). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1626355.1626363
