Quantifying the quality difference between L1 and L2 essays: A rating procedure with bilingual raters and L1 and L2 benchmark essays

Abstract

There is broad consensus that, as a result of the extra constraints placed on working memory, texts written in a second language (L2) are usually of lower quality than texts written in the first language (L1) by the same writer. However, no method is currently available for quantifying the quality difference between L1 and L2 texts. In the present study, we tested a rating procedure that enables quality judgments of L1 and L2 texts on a single scale. Two main features define this procedure: (1) raters are bilingual or near-native users of both the L1 and L2; (2) ratings are performed with L1 and L2 benchmark texts. Direct comparisons of observed L1 and L2 scores are warranted only if the ratings with L1 and L2 benchmarks constitute parallel tests and if the ratings are reliable. Results showed that both conditions were met. Effect sizes (Cohen's d) indicate that, although score variances are large, there is a relatively large added L2 effect: in the investigated population, L2 text scores were much lower than L1 text scores. The tested rating procedure is a promising method for cross-national comparisons of writing proficiency. © The Author(s) 2012.

Citation (APA)

Tillema, M., van den Bergh, H., Rijlaarsdam, G., & Sanders, T. (2013). Quantifying the quality difference between L1 and L2 essays: A rating procedure with bilingual raters and L1 and L2 benchmark essays. Language Testing, 30(1), 71–97. https://doi.org/10.1177/0265532212442647
