Validation of sub-constructs in reading comprehension tests using teachers’ classification of cognitive targets

Abstract

Reading comprehension is often treated as a multidimensional construct. In many reading tests, items are distributed over reading process categories to represent the subskills expected to constitute comprehension. This study explores (a) the extent to which the specified subskills of reading comprehension tests are conceptually meaningful to teachers, who score and use national reading test results, and (b) the extent to which teachers agree on how to locate and define item difficulty in terms of expected text comprehension. Eleven teachers of Swedish were asked to classify items from a national reading test in Sweden by process categories similar to those used in the PIRLS reading test. They were also asked to describe the type of comprehension necessary for solving the items. The findings suggest that the reliability of item classification is limited and that teachers' perceptions of item difficulty are diverse. Although the data set in the study is limited, the findings indicate, in line with recent validity theory, that the division of reading comprehension into subskills by cognitive process level will require further validity evidence and should be treated with caution. Implications for the interpretation of test scores and for test development are discussed.

Citation (APA)
Tengberg, M. (2018). Validation of sub-constructs in reading comprehension tests using teachers’ classification of cognitive targets. Language Assessment Quarterly, 15(2), 169–182. https://doi.org/10.1080/15434303.2018.1448820
