Looking beyond scores: validating a CEFR-based university speaking assessment in Mainland China


Abstract

Background: The present study examined the validity of a university-based speaking assessment (the Test of Oral Proficiency in English, or TOPE) in mainland China. The assessment was developed to meet the standards (the Standard for Oral Proficiency in English, or SOPE) set by the university for the teaching and learning of oral English. Methods: The degree of interaction among candidates in the second part of the test (Presentation and Discussion) was analyzed in terms of the frequency of language functions. These functions were reflected in the test syllabus, which was developed on the basis of the CEFR. Results: Quantitative analysis revealed that the majority of the language functions intended by the test syllabus were elicited in candidate performances. At the same time, students showed a relative lack of interactional functions. Further qualitative investigation identified speaking features with the potential to affect candidates' scores. Conclusions: The merits and limitations of using the CEFR to develop the speaking assessment (i.e., the applicability of the CEFR to the TOPE) are discussed. The study concludes with its limitations and suggestions for future research.

Citation (APA)

Liu, L., & Jia, G. (2017). Looking beyond scores: validating a CEFR-based university speaking assessment in Mainland China. Language Testing in Asia, 7(1). https://doi.org/10.1186/s40468-017-0034-3
