Use of Automated Scoring in Spoken Language Assessments for Test Takers With Speech Impairments

  • Loukina A
  • Buzick H

This article is free to access.

Abstract

This study is an evaluation of the performance of automated speech scoring for speakers with documented or suspected speech impairments. Given that the use of automated scoring of open‐ended spoken responses is relatively nascent and there is little research to date that includes test takers with disabilities, this small exploratory study focuses on one type of scoring technology, automatic speech scoring (the SpeechRater℠ automated scoring engine); one type of assessment, spontaneous spoken English by nonnative adults (six TOEFL iBT® test speaking items per test taker); and one category of disability, speech impairments. The results show discrepancies between human and SpeechRater scores for speakers with documented speech or hearing impairments who receive accommodations and for speakers whose responses were deferred to the scoring leader by human raters because the responses exhibited signs of a speech impairment. SpeechRater scores for these studied groups tended to be higher than the human scores. Based on a smaller subsample, the word error rate was higher for these groups relative to the control group, suggesting that the automatic speech recognition system contributed to the discrepancies between SpeechRater and human scores.

Report Number: ETS RR-17-42
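
As context for the two quantities the abstract compares across groups, the sketch below illustrates how word error rate (WER) and the mean machine-versus-human score discrepancy are typically computed. This is not the report's data or ETS's scoring pipeline; the function names and example values are hypothetical, and WER is taken in its standard form (word-level edit distance divided by reference length).

```python
# Illustrative sketch only; all names and values below are hypothetical.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Standard WER: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


def mean_score_discrepancy(machine_scores, human_scores) -> float:
    """Average of (machine - human); positive means the engine scores higher."""
    diffs = [m - h for m, h in zip(machine_scores, human_scores)]
    return sum(diffs) / len(diffs)


if __name__ == "__main__":
    # Hypothetical transcripts and scores for one studied group.
    print(word_error_rate("the lecture was about climate change",
                          "the lecture was about climb change"))     # ~0.17
    print(mean_score_discrepancy([3.0, 2.5, 3.5], [2.5, 2.0, 3.0]))  # 0.5
```

In the report's framing, a positive mean discrepancy for the studied groups (engine scores above human scores) alongside elevated WER is consistent with recognition errors feeding into the automated scoring.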

Citation (APA)

Loukina, A., & Buzick, H. (2017). Use of Automated Scoring in Spoken Language Assessments for Test Takers With Speech Impairments. ETS Research Report Series, 2017(1), 1–10. https://doi.org/10.1002/ets2.12170
