To translate speech into multiple target languages simultaneously, an extension of stochastic finite-state transducers is proposed. In this approach, the speech translation model consists of a single network in which the acoustic models (on the input side) and the multilingual translation model (on the output side) are embedded. The multi-target model has been evaluated in a practical task, and the results have been compared with those obtained using several mono-target models. Experimental results show that the multi-target model requires less memory. In addition, a single decoding pass suffices to obtain the speech translated into multiple languages.
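The core idea — one decoding pass that yields every target language at once — can be illustrated with a minimal sketch. The following toy transducer (the lexicon, states, and probabilities are invented for illustration, not taken from the paper) attaches a tuple of output strings, one per target language, to each transition, so a single Viterbi-style pass over the input produces all translations together:

```python
# Minimal sketch (hypothetical data) of a multi-target stochastic
# finite-state transducer: each transition consumes one source symbol
# and emits a tuple of target strings, one per language, so a single
# decoding pass yields every translation at once.
import math

# Transitions: (state, source_word) -> list of (next_state, (es, fr), prob)
# This tiny English -> Spanish/French lexicon is purely illustrative.
TRANSITIONS = {
    (0, "good"): [(1, ("buenos", "bon"), 1.0)],
    (1, "morning"): [(2, ("dias", "jour"), 1.0)],
}
FINAL_STATES = {2}

def decode(words):
    """Single Viterbi-style pass producing all target languages at once."""
    # Each hypothesis: (state, log-probability, outputs per language).
    hyps = [(0, 0.0, ([], []))]
    for w in words:
        new_hyps = []
        for state, lp, (es, fr) in hyps:
            for nxt, (out_es, out_fr), p in TRANSITIONS.get((state, w), []):
                new_hyps.append(
                    (nxt, lp + math.log(p), (es + [out_es], fr + [out_fr]))
                )
        hyps = new_hyps
    # Keep the best hypothesis that ends in a final state.
    best = max((h for h in hyps if h[0] in FINAL_STATES),
               key=lambda h: h[1], default=None)
    if best is None:
        return None
    _, _, (es, fr) = best
    return {"es": " ".join(es), "fr": " ".join(fr)}

print(decode(["good", "morning"]))
```

In the full model described by the abstract, the input side would carry acoustic models rather than source words, but the decoding principle is the same: because the output tuples travel together on the shared transitions, the network is stored once instead of once per language, which is the source of the memory savings reported.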
CITATION STYLE
Pérez, A., Torres, M. I., González, M. T., & Casacuberta, F. (2007). Speech-input multi-target machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 56–63). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1626355.1626363