Linguistic features for readability assessment


Abstract

Readability assessment aims to automatically classify text by the level appropriate for learning readers. Traditional approaches to this task utilize a variety of linguistically motivated features paired with simple machine learning models. More recent methods have improved performance by discarding these features and utilizing deep learning models. However, it is unknown whether augmenting deep learning models with linguistically motivated features would improve performance further. This paper combines the two approaches with the goal of improving overall model performance and addressing this question. Evaluating on two large readability corpora, we find that, given sufficient training data, augmenting deep learning models with linguistically motivated features does not improve state-of-the-art performance. Our results provide preliminary evidence for the hypothesis that state-of-the-art deep learning models already represent linguistic features of the text relevant to readability. Future research on the nature of the representations formed in these models can shed light on the learned features and their relation to the linguistically motivated features hypothesized in traditional approaches.
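To make the "linguistically motivated features" concrete, the following is a minimal sketch of the kind of augmentation the abstract describes: a few classic surface-level readability features (average sentence length, average word length, type-token ratio) computed from raw text and concatenated with a deep model's text embedding. The specific feature set and the concatenation-based fusion are illustrative assumptions, not the paper's exact method.

```python
import re

def linguistic_features(text):
    """Compute a small vector of traditional surface-level readability features."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(len(words), 1)
    n_sents = max(len(sentences), 1)
    avg_sentence_len = len(words) / n_sents                    # words per sentence
    avg_word_len = sum(len(w) for w in words) / n_words        # characters per word
    type_token_ratio = len({w.lower() for w in words}) / n_words  # lexical diversity
    return [avg_sentence_len, avg_word_len, type_token_ratio]

def augment_embedding(deep_embedding, text):
    """Concatenate a deep model's pooled text embedding (here a placeholder
    list of floats) with the handcrafted feature vector, one simple way to
    feed both signals to a downstream readability classifier."""
    return list(deep_embedding) + linguistic_features(text)

# Toy usage with a dummy two-dimensional "embedding":
text = "The cat sat. The cat sat on the mat."
augmented = augment_embedding([0.1, 0.2], text)
```

A classifier trained on `augmented` would see both the learned representation and the handcrafted features; the paper's finding is that, with enough training data, the handcrafted part adds no measurable benefit.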

Citation (APA)
Deutsch, T., Jasbi, M., & Shieber, S. (2020). Linguistic features for readability assessment. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 1–17). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.bea-1.1
