Text simplification with self-attention-based pointer-generator networks

Abstract

Text simplification aims to reduce the semantic complexity of text while retaining its meaning. Recent work has begun exploring neural text simplification (NTS) with the sequence-to-sequence (Seq2seq) attentional model, which has achieved success in many text generation tasks. However, long-range dependencies and out-of-vocabulary (OOV) words remain challenges for text simplification. To address these problems, we propose a text simplification model that incorporates a self-attention mechanism and a pointer-generator network. Experiments on aligned Wikipedia and Simple Wikipedia datasets demonstrate that our model outperforms the baseline systems.
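The abstract gives no implementation details, but the core idea of a pointer-generator network is a fixed formulation: the final output distribution mixes a generation distribution over the vocabulary with a copy distribution given by the attention weights over source tokens, which is how OOV words can still be produced. Below is a minimal PyTorch sketch of that mixing step, following See et al.'s 2017 formulation rather than the authors' exact model; all tensor names and shapes are illustrative assumptions, not code from the paper.

    import torch

    def pointer_generator_distribution(vocab_logits, attn_weights, src_ids,
                                       p_gen, extended_vocab_size):
        """Mix generation and copy distributions (See et al., 2017 style).
        vocab_logits: (batch, vocab)   decoder scores over the fixed vocabulary
        attn_weights: (batch, src_len) attention over source tokens (sums to 1)
        src_ids:      (batch, src_len) source token ids in the extended vocab
        p_gen:        (batch, 1)       generation probability in [0, 1]
        Shapes and names are assumptions for illustration.
        """
        batch, vocab = vocab_logits.shape
        p_vocab = torch.softmax(vocab_logits, dim=-1)

        # Pad the vocabulary distribution so OOV source words
        # (ids >= vocab in the extended vocabulary) can receive mass.
        extra = extended_vocab_size - vocab
        p_final = torch.cat([p_gen * p_vocab,
                             torch.zeros(batch, extra)], dim=-1)

        # Scatter-add the copy distribution onto the source token ids:
        # each source position contributes (1 - p_gen) * its attention weight.
        p_final.scatter_add_(1, src_ids, (1.0 - p_gen) * attn_weights)
        return p_final

In the model the abstract describes, attn_weights would come from (self-)attention over the encoder states; a source token absent from the fixed vocabulary keeps an id >= vocab in src_ids, so it can only receive probability through the copy term, which is the pointer mechanism's answer to the OOV problem.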

Citation (APA)

Li, T., Li, Y., Qiang, J., & Yuan, Y. H. (2018). Text simplification with self-attention-based pointer-generator networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11305 LNCS, pp. 537–545). Springer Verlag. https://doi.org/10.1007/978-3-030-04221-9_48
