Using MFRM and SEM in the Validation of Analytic Rating Scales of an English Speaking Assessment

  • Fan, J.
  • Bond, T.

Abstract

This study reports a preliminary investigation into the construct validity of an analytic rating scale developed for a school-based English speaking test. Informed by the theory of interpretative validity argument, the study examined the plausibility and accuracy of three warrants deemed essential to the construct validity of the rating scale. Methodologically, it used the many-facets Rasch model (MFRM) and structural equation modeling (SEM) in conjunction to examine the three warrants and their respective rebuttals. Although the MFRM analysis largely supported the first two warrants, the results indicated that the category structure of the rating scale did not function as intended and therefore needed further revision. In the SEM analysis, a multitrait-multimethod (MTMM) confirmatory factor analysis (CFA) approach was employed, whereby four MTMM models were specified, evaluated, and compared. The results lent support to the third warrant but raised legitimate concerns over common method bias. The study has implications for future revisions of the rating scale and the speaking assessment in the interest of improved validity. It also has methodological implications for developers of performance assessments and validators of rating scales.
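
For orientation, the two techniques named above have standard textbook formulations; the following is a minimal sketch, not necessarily the exact parameterizations used by the authors. In the many-facets Rasch model, the log-odds of examinee n receiving rating category k rather than k-1 from rater j on criterion i are decomposed additively:

\log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k

where B_n is examinee ability, D_i is criterion difficulty, C_j is rater severity, and F_k is the threshold of category k. In an MTMM CFA model of the correlated traits-correlated methods type, each observed rating X_{tm} of trait t obtained under method m loads on both a trait factor and a method factor:

X_{tm} = \lambda^{T}_{tm}\,T_t + \lambda^{M}_{tm}\,M_m + \varepsilon_{tm}

Substantial method-factor loadings relative to trait-factor loadings are the usual indication of common method bias.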

Citation (APA)

Fan, J., & Bond, T. (2016). Using MFRM and SEM in the Validation of Analytic Rating Scales of an English Speaking Assessment. In Pacific Rim Objective Measurement Symposium (PROMS) 2015 Conference Proceedings (pp. 29–50). Springer Singapore. https://doi.org/10.1007/978-981-10-1687-5_3
