Language users in multilingual environments who are trying to make sense of the linguistic challenges they face may well regard the advent of online machine translation (MT) applications as a welcome intervention. Such applications have made it possible for virtually anyone to try their hand at translation, and with minimal effort at that. However, the usefulness of the output of these translation applications varies. The empirical research described in this article is a continuation of an investigation into the usefulness of MT in a higher education context. In 2010, Afrikaans and English translations generated by Google Translate and two human translators, based on the same set of source texts, were evaluated by a panel of raters using a holistic assessment tool. In 2011 and 2012, the same set of source texts was translated again with Google Translate, and those translations have since been evaluated in exactly the same manner. The results show that the quality of Google Translate's output improved over the three years. Subsequently, an error analysis was performed on the translation set of one text type by means of a second assessment tool. Despite the overall improvement in quality, we found that the 2012 translation contained unexpected new errors. In addition, the error analysis showed that mistranslation posed the largest risk when using this MT application. Users of MT should therefore understand the risks of their choice and recognise that some text types and contexts are better suited to MT than others. Armed with this knowledge, translators and multilingual communities can make informed decisions regarding MT and translation technology in general.
Lotz, S., & Van Rensburg, A. (2014). Translation technology explored: Has a three-year maturation period done Google Translate any good? Stellenbosch Papers in Linguistics Plus, 43(0), 235. https://doi.org/10.5842/43-0-205