Sentence Simplification Based on Multi-Stage Encoder Model

Abstract

Sentence simplification aims to simplify a complex sentence while retaining its main idea, and it is one of the most important tasks in natural language processing. Recent work has addressed the task with sequence-to-sequence (Seq2seq) models. However, these conventional Seq2seq models are usually based on a single-stage encoder that reads the source complex sentence only once, which makes it hard to extract the representational features of the source sentence precisely. To resolve this problem, we propose a Seq2seq model for sentence simplification based on a multi-stage encoder. Specifically, the encoder of the proposed model has three stages: an N-gram reading stage, a glance-over stage, and a final encoding stage. The N-gram reading stage captures N-gram feature embeddings for the other stages, and the glance-over stage extracts local and global information about the source sentence. The final encoding stage takes advantage of the information extracted by the former two stages to encode the source sentence better. We also introduce a novel attention connection method that helps the decoder make full use of the encoder's information. Experiments on three public datasets demonstrate that the proposed model outperforms state-of-the-art baseline simplification systems.
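The three encoder stages described above can be sketched as follows. This is a minimal illustrative toy in NumPy, not the paper's actual implementation: the function names, the averaging-based feature extraction, and all dimensions are assumptions made for clarity.

```python
import numpy as np

# Hypothetical sketch of a three-stage encoder: the stage names follow
# the abstract, but the internals (window averaging, concatenation)
# are simplified stand-ins for the paper's learned components.

def ngram_stage(tokens, embed, n=2):
    # N-gram reading stage: average embeddings over a sliding n-gram
    # window to produce an N-gram feature embedding per position.
    vecs = np.stack([embed[t] for t in tokens])               # (T, d)
    return np.stack([vecs[max(0, i - n + 1): i + 1].mean(axis=0)
                     for i in range(len(tokens))])            # (T, d)

def glance_over_stage(feats):
    # Glance-over stage: local information from a small neighbourhood
    # around each position, global information from a sentence-level
    # mean broadcast back to every position.
    T = feats.shape[0]
    local = np.stack([feats[max(0, i - 1): i + 2].mean(axis=0)
                      for i in range(T)])                     # (T, d)
    global_ = feats.mean(axis=0, keepdims=True).repeat(T, axis=0)
    return local, global_

def final_stage(ngram_feats, local, global_):
    # Final encoding stage: fuse the outputs of the first two stages
    # into the encoder states (here a simple concatenation).
    return np.concatenate([ngram_feats, local, global_], axis=-1)

rng = np.random.default_rng(0)
vocab = {"the": 0, "cat": 1, "sat": 2}
embed = rng.normal(size=(len(vocab), 4))                      # d = 4
tokens = [vocab[w] for w in "the cat sat".split()]

ng = ngram_stage(tokens, embed)
local, glob = glance_over_stage(ng)
states = final_stage(ng, local, glob)
print(states.shape)  # one fused encoder state per source token: (3, 12)
```

In the real model each stage would be a learned neural module and the fused states would feed an attention-equipped decoder; the sketch only shows how information from the earlier stages flows into the final encoding.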

Cite

APA

Zhang, L., & Deng, H. (2019). Sentence Simplification Based on Multi-Stage Encoder Model. IEEE Access, 7, 174248–174256. https://doi.org/10.1109/ACCESS.2019.2957160
