Sequence-to-Sequence Models for Automated Text Simplification

Abstract

A key writing skill is the capability to clearly convey desired meaning using available linguistic knowledge. Consequently, writers must select from a large array of idioms, semantically equivalent vocabulary terms, and discourse features that simultaneously reflect content and allow readers to grasp meaning. In many cases, a simplified version of a text is needed to ensure comprehension by a targeted audience (e.g., second language learners). To address this need, we propose an automated method to simplify texts based on paraphrasing. Specifically, we explore the potential for a deep learning model, previously used for machine translation, to learn a simplified version of the English language within the context of short phrases. The best model, based on a Universal Transformer architecture, achieved a BLEU score of 66.01. We also evaluated this model’s capability to perform similar transformations on texts that were simplified by human experts at different levels.
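The BLEU score reported above measures n-gram overlap between a system's output and a reference text. As a rough illustration of the metric (not the paper's evaluation pipeline, which presumably used a standard toolkit over a full test set), the following sketch computes a simplified sentence-level BLEU with uniform n-gram weights and a brevity penalty; the example sentences are hypothetical.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, hypothesis, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of modified
    n-gram precisions (n = 1..max_n) times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hypothesis, n))
        ref_counts = Counter(ngrams(reference, n))
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        # Floor zero counts to avoid log(0); real toolkits use smoothing.
        precisions.append(max(overlap, 1e-9) / total)
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = min(1.0, math.exp(1 - len(reference) / max(len(hypothesis), 1)))
    return bp * geo_mean

# Hypothetical simplification pair: "demolished" paraphrased from "torn down".
ref = "the house was torn down".split()
hyp = "the house was demolished".split()
print(round(bleu(ref, ref) * 100, 2))  # identical texts score 100.0
print(0.0 < bleu(ref, hyp) < 1.0)      # partial overlap scores in between
```

A score like 66.01 thus indicates substantial, but not complete, n-gram agreement between the model's simplifications and the human-written references.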

Citation (APA)

Botarleanu, R. M., Dascalu, M., Crossley, S. A., & McNamara, D. S. (2020). Sequence-to-Sequence Models for Automated Text Simplification. In Lecture Notes in Computer Science (Vol. 12164 LNAI, pp. 31–36). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-52240-7_6
