Candidate evaluation strategies for improved difficulty prediction of language tests

17 citations · 81 Mendeley readers

Abstract

Language proficiency tests are a useful tool for evaluating learner progress, provided the test difficulty matches the learner's level. In this work, we describe a generalized framework for test difficulty prediction that is applicable to several languages and test types. In addition, we develop two ranking strategies for candidate evaluation, inspired by automatic solving methods, that are based on language model probability and semantic relatedness. These ranking strategies lead to significant improvements in the difficulty prediction of cloze tests.
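The abstract mentions ranking cloze-gap candidates by language model probability. The paper's actual models and features are not reproduced on this page; as an illustration only, a toy sketch of the general idea, using an add-one-smoothed bigram model (all names and the corpus below are invented for the example):

```python
# Illustrative sketch: rank cloze-gap candidates by probability under a
# toy add-one-smoothed bigram language model. This is NOT the paper's
# method, only a generic instance of LM-probability-based ranking.
from collections import Counter

def train_bigram(corpus_tokens):
    unigrams = Counter(corpus_tokens)
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    vocab_size = len(unigrams)

    def prob(prev, word):
        # Add-one smoothing so unseen bigrams get nonzero probability.
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

    return prob

def rank_candidates(prob, left_context, candidates):
    # More probable fillers given the left context rank higher; such
    # scores can serve as features for predicting gap difficulty.
    scored = [(c, prob(left_context, c)) for c in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

corpus = "the cat sat on the mat and the cat slept".split()
prob = train_bigram(corpus)
ranking = rank_candidates(prob, "the", ["cat", "mat", "dog"])
# "cat" ranks first: "the cat" occurs twice in the toy corpus.
```

In a real setting the bigram model would be replaced by a large-scale language model, and the resulting candidate scores would feed into the difficulty predictor as features.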


Citation (APA)

Beinborn, L., Zesch, T., & Gurevych, I. (2015). Candidate evaluation strategies for improved difficulty prediction of language tests. In 10th Workshop on Innovative Use of NLP for Building Educational Applications, BEA 2015 at the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2015 (pp. 1–11). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/w15-0601

Readers over time: [chart of Mendeley readers per year, 2015–2025]

Readers' Seniority

PhD / Post grad / Masters / Doc   24  (71%)
Researcher                         7  (21%)
Lecturer / Post doc                2   (6%)
Professor / Associate Prof.        1   (3%)

Readers' Discipline

Computer Science                       31  (72%)
Linguistics                             9  (21%)
Agricultural and Biological Sciences    2   (5%)
Neuroscience                            1   (2%)
