In this study, we aim to automatically score spoken responses from an international English assessment targeted at non-native English-speaking children aged 8 years and above. In contrast to most previous studies, which focused on scoring adult non-native English speech, we explored automated scoring for child language assessment. We developed automated scoring models based on a large set of features covering delivery (pronunciation and fluency), language use (grammar and vocabulary), and topic development (coherence). In particular, to assess the level of grammatical development, we used a child language metric that measures syntactic proficiency in children's emerging language. Owing to acoustic and linguistic differences between child and adult speech, automatic speech recognition (ASR) of child speech is a challenging task, and this problem may increase the difficulty of automated scoring. To investigate the impact of ASR errors on automated scores, we compared scoring models based on features from ASR transcriptions with models based on human transcriptions. Our results show that there is potential for the automatic scoring of spoken non-native child language: the best-performing model based on ASR transcriptions achieved a correlation of 0.86 with human-rated scores.
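As a rough illustration of the kind of pipeline described above, the sketch below trains a simple regression scoring model on synthetic stand-ins for delivery, language-use, and topic-development features and reports its Pearson correlation with human-rated scores. The feature set, the synthetic data, and the choice of linear regression are illustrative assumptions only; they do not reflect the paper's actual features or model.

```python
# Minimal sketch of a feature-based spoken-response scoring pipeline.
# All feature names, data, and the learner choice are illustrative assumptions,
# not the implementation used in the paper.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for delivery, language-use, and topic-development
# features extracted from (ASR or human) transcriptions of each response,
# e.g. speaking rate, pause ratio, a grammar metric, vocabulary diversity.
n_responses = 500
features = rng.normal(size=(n_responses, 6))

# Synthetic human-rated proficiency scores on a 1-5 scale.
human_scores = np.clip(
    3 + features @ rng.normal(size=6) * 0.5
    + rng.normal(scale=0.3, size=n_responses),
    1, 5)

X_train, X_test, y_train, y_test = train_test_split(
    features, human_scores, test_size=0.2, random_state=0)

# Fit a regression scoring model on the training responses.
model = LinearRegression().fit(X_train, y_train)
predicted = model.predict(X_test)

# Agreement with human raters is reported as a Pearson correlation.
r, _ = pearsonr(predicted, y_test)
print(f"Correlation with human-rated scores: {r:.2f}")
```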
Hassanali, K. N., Yoon, S.-Y., & Chen, L. (2015). Automatic scoring of non-native children's spoken language proficiency. In Speech and Language Technology in Education (SLaTE 2015) (pp. 13–18). International Speech Communication Association (ISCA). https://doi.org/10.21437/slate.2015-3