Using Large Language Models to Develop Readability Formulas for Educational Settings

Abstract

Readability formulas can be used to better match readers and texts. Current state-of-the-art readability formulas rely on large language models such as transformer models (e.g., BERT) that capture language semantics, but their size and runtimes make them impractical in educational settings. This study examines the effectiveness of new readability formulas developed on the CommonLit Ease of Readability (CLEAR) corpus using more efficient sentence-embedding models, including doc2vec, the Universal Sentence Encoder, and Sentence-BERT. These sentence-embedding formulas are compared to traditional readability formulas, newer NLP-informed linguistic-feature formulas, and BERT-based models. The results indicate that sentence-embedding readability formulas perform well and are practical for use in a variety of educational settings. The study also introduces an open-source NLP website for readily assessing the readability of texts, along with an application programming interface (API) that can be integrated into online educational learning systems to better match texts to readers.
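
To illustrate the general approach the abstract describes, the sketch below fits a simple regression on sentence embeddings to predict readability scores. This is an assumption-laden demonstration, not the authors' pipeline: the embedding model name ("all-MiniLM-L6-v2"), the choice of ridge regression, and the toy passages and scores are all invented for illustration and merely stand in for a corpus such as CLEAR.

```python
# Minimal sketch of a sentence-embedding readability formula.
# Assumptions for demonstration only: the embedding model, the ridge regression,
# and the toy data below are NOT taken from the paper.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import Ridge

# Toy passages with invented ease-of-reading scores (higher = easier),
# standing in for a corpus such as CLEAR.
texts = [
    "The cat sat on the mat.",
    "Dogs like to play outside in the sun.",
    "Photosynthesis converts light energy into chemical energy stored in glucose.",
    "Quantum decoherence arises from entanglement between a system and its environment.",
]
scores = [1.2, 0.9, -0.8, -1.5]

# Encode each passage as a fixed-length sentence embedding.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode(texts)

# Fit a lightweight regression on the embeddings to act as the readability "formula".
formula = Ridge(alpha=1.0).fit(X, scores)

# Score a new passage: higher predicted values indicate easier text.
new_text = "Birds can fly because their bones are light."
print(formula.predict(encoder.encode([new_text]))[0])
```

Because the trained regression is only a small weight vector applied to a compact embedding, scoring new texts is fast enough for classroom tools or an online API, which is the practical advantage the study highlights over full BERT-based readability models.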

Citation (APA)

Crossley, S., Choi, J. S., Scherber, Y., & Lucka, M. (2023). Using Large Language Models to Develop Readability Formulas for Educational Settings. In Communications in Computer and Information Science (Vol. 1831 CCIS, pp. 422–427). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-36336-8_66
