Linguistic Feature Representation with Statistical Relational Learning for Readability Assessment

Abstract

Traditional NLP models for readability assessment represent a document as a vector of words or a vector of linguistic features, which may be sparse, discrete, and blind to the latent relations among features. We observe from data and from linguistic theory that a document's linguistic features are not necessarily conditionally independent. To capture the latent relations among linguistic features, we propose to build feature graphs and learn distributed representations with Statistical Relational Learning. We then project document vectors onto the linguistic feature embedding space to produce a knowledge-enriched document representation. We showcase this idea with Chinese L1 readability classification experiments and achieve positive results: on the current data set, the proposed model outperforms traditional vector space models and other embedding-based models, and it merits further exploration.
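The projection step described above can be sketched as a matrix product between a document-feature matrix and a feature embedding matrix. The following is a minimal illustration, not the authors' code: the function name `enrich_documents`, the shapes, and the row normalization are assumptions, and the embeddings here are random stand-ins for embeddings that would actually be learned from a feature graph with a statistical relational model.

```python
import numpy as np

def enrich_documents(docs: np.ndarray, feature_emb: np.ndarray) -> np.ndarray:
    """Project sparse document-feature vectors into a feature embedding space.

    docs        : (n_docs, n_features) counts/values of linguistic features.
    feature_emb : (n_features, dim) embeddings, assumed to be learned from a
                  feature graph (here filled with random placeholder values).
    Returns     : (n_docs, dim) enriched document representations.
    """
    # Row-normalize so longer documents do not dominate the projection.
    norms = np.linalg.norm(docs, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    return (docs / norms) @ feature_emb

# Toy example: 3 documents, 5 linguistic features, 4-dimensional embeddings.
rng = np.random.default_rng(0)
docs = rng.integers(0, 3, size=(3, 5)).astype(float)
feature_emb = rng.normal(size=(5, 4))
enriched = enrich_documents(docs, feature_emb)
print(enriched.shape)  # (3, 4)
```

The enriched vectors can then be fed to any standard classifier for the readability labels; the key point is that features co-occurring in the graph end up with nearby embeddings, so documents sharing related (not merely identical) features become closer in the projected space.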

Citation (APA)

Qiu, X., Lu, D., Shen, Y., & Cai, Y. (2019). Linguistic Feature Representation with Statistical Relational Learning for Readability Assessment. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11839 LNAI, pp. 360–369). Springer. https://doi.org/10.1007/978-3-030-32236-6_32
