Automated Scoring of Speaking Tasks in the Test of English-for-Teaching (TEFT™)

  • Zechner K
  • Chen L
  • Davis L
  • et al.

Abstract

This research report summarizes the research and development effort devoted to creating scoring models for automatically scoring spoken item responses from a pilot administration of the Test of English-for-Teaching (TEFT™) within the ELTeach™ framework. The test consists of items for all four language modalities: reading, listening, writing, and speaking. This report addresses only the speaking items, which elicit responses ranging from highly predictable to semipredictable speech from nonnative English teachers or teacher candidates. We describe the components of the system for automated scoring: an automatic speech recognition (ASR) system, a set of filtering models that flag nonscorable responses, linguistic measures relating to the various construct subdimensions, and multiple linear regression scoring models for each item type. The system simulates a hybrid setup in which responses flagged as potentially nonscorable by any component of the filtering model are routed to a human rater, while all other responses are scored automatically.

Report Number: ETS RR–15–31
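As a rough illustration of the hybrid routing and regression-based scoring described in the abstract, here is a minimal sketch in Python. The filter checks, thresholds, feature names, and regression weights are all assumptions invented for illustration; they do not come from the report.

```python
# A minimal sketch, assuming hypothetical feature names, filter checks, and
# model weights; none of these values come from the report.
from dataclasses import dataclass, field

@dataclass
class Response:
    audio_id: str
    asr_confidence: float    # mean ASR word confidence (assumed filter signal)
    speech_duration: float   # seconds of detected speech (assumed filter signal)
    features: dict = field(default_factory=dict)  # linguistic measures per subdimension

def is_nonscorable(r: Response) -> bool:
    """Filtering model: flag a response if any individual check trips."""
    checks = [
        r.speech_duration < 1.0,   # e.g., silence or no detectable speech
        r.asr_confidence < 0.3,    # e.g., noise, non-English, or off-topic speech
    ]
    return any(checks)

# Illustrative multiple linear regression scoring model for one item type;
# the intercept and weights are placeholders, not the report's estimates.
INTERCEPT = 1.0
WEIGHTS = {"fluency": 0.8, "pronunciation": 0.6, "vocabulary": 0.4}

def score(r: Response) -> float:
    return INTERCEPT + sum(w * r.features.get(name, 0.0) for name, w in WEIGHTS.items())

def route(r: Response):
    """Hybrid routing: flagged responses go to a human rater;
    everything else is scored automatically."""
    if is_nonscorable(r):
        return ("human_rater", None)
    return ("automated", score(r))

# Example usage with a made-up response:
r = Response("item01_cand42", asr_confidence=0.85, speech_duration=12.4,
             features={"fluency": 3.1, "pronunciation": 2.8, "vocabulary": 3.5})
print(route(r))  # -> ('automated', ~6.56)
```

Note that the report describes a separate regression model per item type; the single weight vector above stands in for all of them for brevity.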

Citation (APA)

Zechner, K., Chen, L., Davis, L., Evanini, K., Lee, C. M., Leong, C. W., … Yoon, S. (2015). Automated Scoring of Speaking Tasks in the Test of English-for-Teaching (TEFT™). ETS Research Report Series, 2015(2), 1–17. https://doi.org/10.1002/ets2.12080
