State-of-the-art neural machine translation (NMT) systems are generally trained on specific domains by carefully selecting the training sets and applying proper domain adaptation techniques. In this paper we consider the real-world scenario in which the target domain is not predefined, so the system must be able to translate text from multiple domains. We compare the performance of a generic NMT system and a generic phrase-based statistical machine translation (PBMT) system by training both on a parallel corpus composed of data from different domains. Our results on multi-domain English-French data show that, in these realistic conditions, PBMT outperforms its neural counterpart. This raises the question: is NMT ready for deployment as a generic, multi-purpose MT backbone in real-world settings?
Amin Farajian, M., Turchi, M., Negri, M., Bertoldi, N., & Federico, M. (2017). Neural vs. phrase-based machine translation in a multi-domain scenario. In 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017 - Proceedings of Conference (Vol. 2, pp. 280–284). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/e17-2045