Examining the Accuracy of a Conversation-Based Assessment in Interpreting English Learners' Written Responses


Abstract

Substantial progress has been made toward applying technology-enhanced conversation-based assessments (CBAs) to measure the English-language proficiency of English learners (ELs). CBAs are systems that use conversations between computer-animated agents and a test taker. We expanded the design and capability of prior conversation-based instructional and assessment systems and developed a CBA designed to measure the English-language skills and mathematics knowledge of middle school ELs. The prototype CBA simulates an authentic and engaging mathematics classroom in which the test taker interacts with two virtual agents to solve math problems. We embedded feedback and supports that are triggered by how the CBA interprets students' written responses. In this study, we administered the CBA to middle school ELs (N = 82) residing in the United States and examined the extent to which the CBA system was able to consistently interpret the students' responses (722 responses from the 82 students). The findings helped us understand the factors that affect the accuracy of the CBA system's interpretations and shed light on how to improve CBA systems that incorporate scaffolding.

Citation (APA)

Lopez, A. A., Guzman-Orth, D., Zapata-Rivera, D., Forsyth, C. M., & Luce, C. (2021). Examining the Accuracy of a Conversation-Based Assessment in Interpreting English Learners’ Written Responses. ETS Research Report Series, 2021(1), 1–15. https://doi.org/10.1002/ets2.12315
