Sentence-Level Readability Assessment for L2 Chinese Learning


Abstract

Automatic assessment of sentence readability can support educators in selecting example sentences suited to different learning levels to complement teaching materials. Although there is extensive research on document-level and passage-level Chinese readability assessment, sentence-level evaluation remains largely unexplored. We bridge this gap by providing a research framework and a large corpus of nearly 40,000 sentences annotated with ten readability levels. We design experiments to analyze the influence of 88 linguistic features on sentence complexity; the results suggest that these features significantly improve predictive performance, reaching up to 70.78% distance-1 adjacent accuracy. Model comparison also confirms that our proposed feature set reduces prediction bias without adding variance. We hope that our corpus, feature sets, and experimental validation will provide educators and linguists with additional language resources, insights, and automatic tools for future related research.
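The abstract reports performance as distance-1 adjacent accuracy, a standard metric for ordinal-level prediction tasks that counts a prediction as correct when it lies within one level of the true label. The paper does not publish its evaluation code; the sketch below is a minimal illustration of how this metric is conventionally computed (the function name and example labels are hypothetical):

```python
def adjacent_accuracy(y_true, y_pred, distance=1):
    """Fraction of predictions within `distance` levels of the true label."""
    assert len(y_true) == len(y_pred) and y_true
    hits = sum(abs(t - p) <= distance for t, p in zip(y_true, y_pred))
    return hits / len(y_true)

# Example with ten-level labels (1-10): a prediction off by at most
# one level still counts as correct under distance-1 adjacent accuracy.
true_levels = [3, 5, 7, 2, 9]
pred_levels = [4, 5, 9, 2, 8]
print(adjacent_accuracy(true_levels, pred_levels))  # 0.8 (4 of 5 within one level)
```

With `distance=0` the same function reduces to exact accuracy, which is why adjacent accuracy is always at least as high as the exact figure.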

Citation (APA)

Lu, D., Qiu, X., & Cai, Y. (2020). Sentence-Level Readability Assessment for L2 Chinese Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11831 LNAI, pp. 381–392). Springer. https://doi.org/10.1007/978-3-030-38189-9_40
