The Validity of Inferences From Locally Developed Assessments Administered Globally

Abstract

In this paper, we first examine the challenges of score comparability associated with the use of exported assessments. By exported assessments, we mean assessments that are developed for domestic use and are then administered in other countries, in either the same or a different language. Second, we provide suggestions to better support their valid and fair use. We illustrate these issues in the context of higher education assessments designed to serve different purposes (e.g., informing admissions decisions or assessing student learning outcomes) within one country, such as the United States, that are later used in other countries. In higher education, the use of exported assessments is on the rise due to increases in globalization, student mobility, and cross-national comparisons of student achievement. This increase leads to more diverse test-taker populations and requires special attention to possible sources of construct-irrelevant variance, which may threaten the score-based inferences made for various populations. Irrelevant sources of variance may emerge from differences in opportunity to learn, curricular exposure, and lack of familiarity with the cultural references used in exported assessments.

Citation (APA)
Oliveri, M. E., & Lawless, R. (2018). The Validity of Inferences From Locally Developed Assessments Administered Globally. ETS Research Report Series, 2018(1), 1–12. https://doi.org/10.1002/ets2.12221
