Is academic discourse accurate when supported by machine translation?

Abstract

Classroom discourse has attracted interest among scholars and educators (Deroey, 2015; Mauranen, 2012; Hyland, 2010), particularly the use of metadiscoursal markers. However, little attention has been paid to these features when they are supported by machine translation (MT) engines in content and language integrated learning (CLIL) contexts. The aim of this paper is to describe the use and frequency of hedges and boosters employed in the fields of History and Heritage, and Psychology, and to analyse the accuracy of the equivalents obtained from two MT engines, namely DeepL and Google Translate. To this end, a small corpus consisting of two seminars was compiled, and qualitative and quantitative methods were applied to determine the frequency and the accuracy of the linguistic structures under study. The results revealed that even though the interactional devices provided by MT engines are highly accurate, some omissions and mistranslations may occur. These findings may be valuable for CLIL lecturers interested in classroom discourse, as well as for translation researchers working with bilingual and multilingual corpora who seek to assess the accuracy of translation tools.
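For readers curious about how such a frequency and accuracy check might be operationalised, the sketch below illustrates one possible approach: counting hedges and boosters in an English source text with whole-word matching, and flagging markers whose assumed Spanish equivalents do not surface in the MT output. The marker lists, the equivalence table and the example sentences are hypothetical illustrations, not the taxonomy, corpus or procedure used in the article.

```python
# Illustrative sketch only: counts a small, hypothetical set of English hedges
# and boosters in a transcript and flags markers whose assumed Spanish
# equivalents are missing from a machine-translated version.
import re
from collections import Counter

HEDGES = ["may", "might", "perhaps", "possibly", "probably", "seem"]       # hypothetical list
BOOSTERS = ["clearly", "obviously", "certainly", "definitely", "in fact"]  # hypothetical list

# Hypothetical English -> Spanish equivalents used to flag potential omissions.
EQUIVALENTS = {
    "may": ["puede", "podría"],
    "perhaps": ["quizá", "quizás", "tal vez"],
    "clearly": ["claramente"],
    "certainly": ["ciertamente", "sin duda"],
}

def count_markers(text, markers):
    """Return a Counter of marker frequencies using whole-word matching."""
    counts = Counter()
    lowered = text.lower()
    for m in markers:
        counts[m] = len(re.findall(r"\b" + re.escape(m) + r"\b", lowered))
    return counts

def flag_omissions(source_counts, translated_text):
    """List markers present in the source whose assumed equivalents are absent in the MT output."""
    translated = translated_text.lower()
    missing = []
    for marker, freq in source_counts.items():
        if freq == 0 or marker not in EQUIVALENTS:
            continue
        if not any(eq in translated for eq in EQUIVALENTS[marker]):
            missing.append(marker)
    return missing

if __name__ == "__main__":
    source = "This may suggest that the findings are clearly relevant, perhaps more than expected."
    mt_output = "Esto sugiere que los resultados son claramente relevantes, quizás más de lo esperado."

    hedge_counts = count_markers(source, HEDGES)
    booster_counts = count_markers(source, BOOSTERS)
    print("Hedges:", dict(hedge_counts))
    print("Boosters:", dict(booster_counts))
    print("Possible omissions in MT output:", flag_omissions(hedge_counts, mt_output))
```

The dictionary-based check is deliberately crude: in a study of this kind, flagged items would serve only as candidates for manual review, since judgements about the accuracy of translated hedges and boosters ultimately require human analysis of the surrounding context.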

Cite

APA

Bellés-Calvera, L., & Quintana, R. C. (2022). Is academic discourse accurate when supported by machine translation? Quaderns de Filologia: Estudis Lingüístics, 27, 171–201. https://doi.org/10.7203/qf.0.24671
