Validating Research-Abstract Writing Assessment Through Latent Regression Modeling and Rater’s Lenses


Abstract

This study validates the research-abstract writing assessment (RAWA), a measure in applied linguistics with two rating scales: a global-move scale of rhetorical purpose and a local-pattern scale of language use, each scored on levels from 0 to 5. The study adopted an embedded mixed-methods design combining a quantitative latent regression model (LRM), which tested how the examinees' (30 EFL doctoral students and 30 EFL master's students) RAWA responses could be explained by examinee-group competence, the scale-by-level difficulty of the two scales, and rater expertise (5 raters), with qualitative interviews on the five raters' perceptions. The LRM results revealed a scale-by-level difficulty effect: across the two scales, level 1 of the global move was the easiest and level 5 of the global move the most difficult. The expert raters assigned lower scores; they also adopted advanced subscales (i.e., content elements, brevity) as criteria and engaged in self-monitoring while rating. The findings reveal the sub-competences of research-abstract writing: the global-move sub-competences of move and content elements, and the local-pattern sub-competences of language use and brevity. Pedagogically, EFL graduate students should further develop the sub-competences of content elements and brevity once they have mastered move and language use as the basics.
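For orientation, the facet decomposition described in the abstract can be written in a generic latent-regression form. The following is a hedged sketch of how such models are typically specified, not the paper's exact equation; the symbols $\theta_n$, $\delta_{ik}$, $\rho_j$, and $\gamma$ are illustrative placeholders for examinee competence, scale-by-level difficulty, rater severity, and the group effect, respectively.

\[
\log \frac{P(X_{nij} = k)}{P(X_{nij} = k - 1)} = \theta_n - \delta_{ik} - \rho_j,
\qquad
\theta_n = \gamma \cdot \mathrm{group}_n + \varepsilon_n, \quad \varepsilon_n \sim N(0, \sigma^2)
\]

Under this reading, the group coefficient $\gamma$ would capture the doctoral-versus-master's competence difference, the $\delta_{ik}$ terms would carry the reported scale-by-level difficulty effect (level 1 of the global move easiest, level 5 hardest), and the rater terms $\rho_j$ would accommodate the expert raters' stricter scoring.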

Citation (APA)

Lin, M. C. (2019). Validating Research-Abstract Writing Assessment Through Latent Regression Modeling and Rater’s Lenses. English Teaching and Learning, 43(3), 297–315. https://doi.org/10.1007/s42321-019-00030-5
