Multiattentive Recurrent Neural Network Architecture for Multilingual Readability Assessment

Abstract

We present a multiattentive recurrent neural network architecture for automatic multilingual readability assessment. The architecture takes raw words as its main input, but internally captures text structure and informs its word attention process using additional syntax- and morphology-related features known to be of great importance to readability. This is achieved by a multiattentive strategy that allows the neural network to focus on specific parts of a text when predicting its reading level. We conducted an exhaustive evaluation using data sets targeting multiple languages and prediction task types, comparing the proposed model with traditional, state-of-the-art, and other neural network strategies.
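To illustrate the general idea of a multiattentive recurrent classifier, the sketch below pairs a bidirectional GRU word encoder with two attention heads: one scoring tokens from their contextual states alone, and one informed by per-token auxiliary (e.g., syntax/morphology) features. This is a minimal illustrative example, not the authors' exact architecture; all layer choices, dimensions, and the feature-conditioned attention head are assumptions.

```python
# Minimal sketch of a multiattentive recurrent readability classifier.
# NOT the paper's exact model: layer sizes, the BiGRU encoder, and the
# auxiliary-feature attention head are illustrative assumptions.
import torch
import torch.nn as nn


class MultiAttentiveReadabilityModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128,
                 aux_feat_dim=32, num_levels=5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        enc_dim = 2 * hidden_dim
        # Attention head 1: scores tokens from their contextual states alone.
        self.word_attn = nn.Linear(enc_dim, 1)
        # Attention head 2: scores tokens from contextual states concatenated
        # with per-token auxiliary features (syntax/morphology cues).
        self.aux_attn = nn.Linear(enc_dim + aux_feat_dim, 1)
        self.classifier = nn.Linear(2 * enc_dim, num_levels)

    def forward(self, token_ids, aux_feats, mask):
        # token_ids: (batch, seq); aux_feats: (batch, seq, aux_feat_dim);
        # mask: (batch, seq), 1 for real tokens and 0 for padding.
        states, _ = self.encoder(self.embedding(token_ids))  # (B, T, enc_dim)

        def attend(scores):
            scores = scores.masked_fill(mask.unsqueeze(-1) == 0, -1e9)
            weights = torch.softmax(scores, dim=1)            # (B, T, 1)
            return (weights * states).sum(dim=1)              # (B, enc_dim)

        word_ctx = attend(self.word_attn(states))
        aux_ctx = attend(self.aux_attn(torch.cat([states, aux_feats], dim=-1)))
        return self.classifier(torch.cat([word_ctx, aux_ctx], dim=-1))


# Example forward pass with random inputs.
model = MultiAttentiveReadabilityModel(vocab_size=10000)
tokens = torch.randint(1, 10000, (4, 20))
feats = torch.randn(4, 20, 32)
mask = torch.ones(4, 20, dtype=torch.long)
logits = model(tokens, feats, mask)   # shape: (4, num_levels)
```

Concatenating the separately attended context vectors lets the classifier weigh word-driven and feature-driven evidence independently, which is one simple way to realize the "focus on specific parts of a text" behavior described in the abstract.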

Citation (APA)

Azpiazu, I. M., & Pera, M. S. (2019). Multiattentive Recurrent Neural Network Architecture for Multilingual Readability Assessment. Transactions of the Association for Computational Linguistics, 7, 421–436. https://doi.org/10.1162/tacl_a_00278
