Feature selection for automated speech scoring

41 citations (citations of this article)
99 readers (Mendeley users who have this article in their library)

Abstract

Automated scoring systems used for the evaluation of spoken or written responses in language assessments need to balance good empirical performance with the interpretability of the scoring models. We compare several methods of feature selection for such scoring systems and show that the use of shrinkage methods such as Lasso regression makes it possible to rapidly build models that both satisfy the requirements of validity and interpretability, which are crucial in assessment contexts, and achieve good empirical performance.
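
The core idea behind the approach described in the abstract, an L1 penalty that drives most feature weights exactly to zero, can be sketched in a few lines. The snippet below is not the authors' system; it assumes synthetic data and scikit-learn's LassoCV, and simply illustrates how the non-zero Lasso coefficients yield a small, interpretable feature set.

```python
# Illustrative sketch only (not the authors' scoring pipeline): Lasso-based
# feature selection on hypothetical speech-feature data.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))            # 500 responses, 40 candidate features
true_w = rng.normal(size=5)
y = X[:, :5] @ true_w + rng.normal(scale=0.5, size=500)  # scores depend on 5 features

# Standardize so the L1 penalty treats all features on a comparable scale.
X_std = StandardScaler().fit_transform(X)

# LassoCV picks the regularization strength by cross-validation; features with
# non-zero coefficients form the selected, interpretable feature set.
model = LassoCV(cv=5).fit(X_std, y)
selected = np.flatnonzero(model.coef_)
print(f"alpha={model.alpha_:.4f}; kept {selected.size}/{X.shape[1]} features: {selected}")
```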

References (Powered by Scopus)

Regression Shrinkage and Selection Via the Lasso (36,198 citations)
L1 penalized estimation in the Cox proportional hazards model (678 citations)
L1-regularization path algorithm for generalized linear models (615 citations)

Cited by (Powered by Scopus)

Using bidirectional LSTM recurrent neural networks to learn high-level abstractions of sequential features for automated scoring of non-native spontaneous speech (87 citations)
Automated Scoring of Nonnative Speech Using the SpeechRater℠ v. 5.0 Engine (56 citations)
Assessing Students' Use of Evidence and Organization in Response-to-Text Writing: Using Natural Language Processing for Rubric-Based Automated Scoring (46 citations)

Citation (APA)

Loukina, A., Zechner, K., Chen, L., & Heilman, M. (2015). Feature selection for automated speech scoring. In 10th Workshop on Innovative Use of NLP for Building Educational Applications, BEA 2015 at the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2015 (pp. 12–19). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/w15-0602

Readers' Seniority

PhD / Post grad / Masters / Doc: 34 (74%)
Researcher: 8 (17%)
Lecturer / Post doc: 3 (7%)
Professor / Associate Prof.: 1 (2%)

Readers' Discipline

Computer Science: 38 (72%)
Linguistics: 9 (17%)
Social Sciences: 3 (6%)
Engineering: 3 (6%)
