Unsupervised neural text simplification


Abstract

The paper presents a first attempt at unsupervised neural text simplification that relies only on unlabeled text corpora. The core framework is composed of a shared encoder and a pair of attentional decoders, crucially assisted by discrimination-based losses and denoising. The framework is trained on unlabeled text collected from an English Wikipedia dump. Our analysis on public test data (both quantitative and qualitative, involving human evaluators) shows that the proposed model can perform text simplification at both the lexical and syntactic levels, competitive with existing supervised methods. It also outperforms viable unsupervised baselines. Adding a few labeled pairs improves performance further.
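The denoising objective mentioned above typically corrupts an input sentence (e.g., by dropping words and locally shuffling word order) and trains the decoder to reconstruct the original. A minimal sketch of such a noise function is shown below; the function name, probabilities, and window size are illustrative assumptions, not the authors' exact implementation:

```python
import random

def add_noise(tokens, drop_prob=0.1, shuffle_window=3, rng=None):
    """Corrupt a token sequence for denoising training (illustrative sketch):
    randomly drop words, then locally shuffle the survivors."""
    rng = rng or random.Random(0)
    # Word dropout: remove each token with probability drop_prob
    kept = [t for t in tokens if rng.random() > drop_prob] or tokens[:1]
    # Local shuffle: add bounded random jitter to each position and re-sort,
    # so a token moves at most shuffle_window positions from its origin
    keys = [i + rng.uniform(0, shuffle_window) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept))]

sentence = "the model performs simplification at lexical and syntactic levels".split()
noisy = add_noise(sentence)
```

Training the decoder to map `noisy` back to `sentence` encourages it to learn a robust language model of its target style, which is what lets the shared encoder pair with two style-specific decoders.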

Citation (APA)

Surya, S., Mishra, A., Laha, A., Jain, P., & Sankaranarayanan, K. (2019). Unsupervised neural text simplification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 2058–2068). Association for Computational Linguistics. https://doi.org/10.18653/v1/p19-1198
