This work introduces new methods for detecting non-scorable tests, i.e., tests that cannot be accurately scored automatically, in educational applications of spoken language proficiency assessment. These include cases of unreliable automatic speech recognition (ASR), often caused by noisy, off-topic, foreign, or unintelligible speech. We examine features that estimate syllable information directly from the signal and compare it with ASR output in order to detect responses with problematic recognition. Further, we explore the usefulness of language-model-based features, both for language models that are highly constrained to the spoken task and for task-independent phoneme language models. We validate our methods on a challenging dataset of young English language learners (ELLs) interacting with an automatic spoken assessment system. Our proposed methods perform comparably to existing non-scorable detection approaches and yield a 21% relative performance improvement when combined with them.
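To make the syllable-comparison idea concrete, here is a minimal sketch of how such a mismatch feature might be computed. It assumes an energy-peak syllable nucleus detector as the signal-derived estimator; the function names (`estimate_syllable_count`, `syllable_mismatch_feature`), thresholds, and frame parameters are illustrative assumptions, not the paper's actual front end, which the abstract does not specify.

```python
import numpy as np
from scipy.signal import find_peaks


def estimate_syllable_count(samples, sample_rate, frame_ms=25, hop_ms=10):
    """Rough signal-derived syllable count: peaks in the short-time
    energy envelope are treated as syllable nuclei. This is a
    hypothetical stand-in for the paper's syllable estimator."""
    frame = int(sample_rate * frame_ms / 1000)
    hop = int(sample_rate * hop_ms / 1000)
    energy = np.array([
        np.sum(samples[i:i + frame] ** 2)
        for i in range(0, max(len(samples) - frame, 1), hop)
    ])
    if energy.max() <= 0:
        return 0
    energy /= energy.max()
    # Require peaks to be ~100 ms apart and reasonably prominent, so
    # small energy ripples are not counted as separate syllables.
    peaks, _ = find_peaks(energy, distance=max(1, 100 // hop_ms),
                          prominence=0.05)
    return len(peaks)


def syllable_mismatch_feature(signal_syllables, asr_syllables):
    """Ratio of signal-derived to ASR-derived syllable counts; values
    far from 1 suggest the recognizer missed or hallucinated speech."""
    if asr_syllables == 0:
        return float("inf") if signal_syllables > 0 else 1.0
    return signal_syllables / asr_syllables


if __name__ == "__main__":
    rate = 16000
    t = np.arange(rate) / rate
    # Synthetic 1 s signal with ~4 energy bursts standing in for syllables.
    audio = np.sin(2 * np.pi * 200 * t) * (np.sin(2 * np.pi * 4 * t) > 0.5)
    n_signal = estimate_syllable_count(audio, rate)
    n_asr = 4  # e.g., syllables counted in the ASR hypothesis text
    print(n_signal, syllable_mismatch_feature(n_signal, n_asr))
```

A ratio far from 1 (many signal-derived syllables but few recognized ones, or the reverse) would then serve as one input to a non-scorable-response classifier, alongside other confidence and language-model features.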