A Generalizability Theory Study to Examine Sources of Score Variance in Third-Party Evaluations Used in Decision-Making for Graduate School Admissions

7 citations · 13 Mendeley readers

This article is free to access.

Abstract

Scores from noncognitive measures are increasingly valued for their utility in helping to inform postsecondary admissions decisions. However, their use has presented challenges because of faking, response biases, or subjectivity, which standardized third-party evaluations (TPEs) can help minimize. Analysts and researchers using TPEs, however, need to be mindful of construct-irrelevant differences that may arise in TPEs due to differences in evaluators' rating approaches, which introduce measurement error. Research on sources of construct-irrelevant variance in TPEs is scarce. We address this paucity by conducting generalizability theory (G theory) analyses using TPE data that inform postsecondary admissions decisions. We also demonstrate an approach to assess the size of interevaluator variability and conduct a decision study to determine the number of evaluators necessary to achieve a desired generalizability coefficient. We illustrate these approaches using a TPE in which applicants select their evaluators, so that most evaluators rate only one applicant. We conclude by presenting strategies to improve the design of TPEs to help increase confidence in their use.
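The decision (D) study mentioned in the abstract can be sketched in a few lines. For a simple one-facet person-by-rater design, the generalizability coefficient for the mean of n ratings is the person variance divided by the person variance plus the residual variance over n. The variance components below (0.40 for applicants, 0.60 for the rater-by-applicant residual) are illustrative assumptions, not estimates from the report:

```python
# Hedged sketch of a one-facet (p x r) decision study; the variance
# components used at the bottom are assumed for illustration only.

def g_coefficient(var_person: float, var_residual: float, n_raters: int) -> float:
    """Generalizability coefficient E(rho^2) when n_raters ratings are averaged."""
    return var_person / (var_person + var_residual / n_raters)

def raters_needed(var_person: float, var_residual: float,
                  target: float, max_n: int = 50) -> int:
    """Smallest number of evaluators whose mean rating reaches the target coefficient."""
    for n in range(1, max_n + 1):
        if g_coefficient(var_person, var_residual, n) >= target:
            return n
    raise ValueError("target not reachable within max_n raters")

# Assumed G-study estimates: applicant variance 0.40, residual variance 0.60.
print(raters_needed(0.40, 0.60, target=0.80))  # -> 6
```

With these assumed components, one evaluator yields a coefficient of 0.40, and six evaluators are needed to reach 0.80, which mirrors the kind of trade-off the report's D study quantifies.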

Citation (APA)

McCaffrey, D. F., Oliveri, M. E., & Holtzman, S. (2018). A Generalizability Theory Study to Examine Sources of Score Variance in Third-Party Evaluations Used in Decision-Making for Graduate School Admissions. ETS Research Report Series, 2018(1), 1–17. https://doi.org/10.1002/ets2.12225
