A Review of Evidence Presented in Support of Three Key Claims in the Validity Argument for the TextEvaluator® Text Analysis Tool

  • Sheehan, K. M.
Abstract

The TextEvaluator® text analysis tool is a fully automated text complexity evaluation tool designed to help teachers and other educators select texts that are consistent with the text complexity guidelines specified in the Common Core State Standards (CCSS). This paper provides an overview of the TextEvaluator measurement approach and summarizes evidence related to three key claims in the TextEvaluator validity argument: (a) TextEvaluator has succeeded in expanding construct coverage beyond the two dimensions of text variation that are traditionally assessed by readability metrics; (b) the TextEvaluator strategy of estimating distinct prediction models for informational, literary, and mixed texts has succeeded in generating text complexity predictions that exhibit little, if any, genre bias; and (c) TextEvaluator scores are highly correlated with text complexity judgments provided by human experts, including judgments generated via the inheritance method and judgments generated via the exemplar method. Implications with respect to the goal of helping teachers and other educators select texts that are closely aligned with the accelerated text complexity exposure trajectory outlined in the CCSS are discussed.

Report Number: ETS RR-16-12

Citation (APA)

Sheehan, K. M. (2016). A review of evidence presented in support of three key claims in the validity argument for the TextEvaluator® text analysis tool. ETS Research Report Series, 2016(1), 1–15. https://doi.org/10.1002/ets2.12100
