Adversarial Reinforcement Learning for Chinese Text Summarization

Abstract

This paper proposes a novel Adversarial Reinforcement Learning architecture for Chinese text summarization. Previous abstractive methods commonly use Maximum Likelihood Estimation (MLE) to optimize the generative models, which often leaves the auto-generated summaries incoherent and inaccurate. To address this problem, we apply an Adversarial Reinforcement Learning strategy to narrow the gap between generated summaries and human-written ones. In our model, a generator produces summaries, a discriminator distinguishes generated summaries from real ones, and a reinforcement learning (RL) strategy iteratively improves the generator. In addition, to better handle Chinese text summarization, we use a character-level model rather than a word-level one and add Text-Attention to the generator. Experiments were run on two Chinese corpora, consisting of long documents and short texts respectively. The results show that our model significantly outperforms previous deep learning models on ROUGE scores.
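As a rough illustration of the generator/discriminator/RL interplay the abstract describes, the sketch below shows a minimal adversarial training loop in PyTorch. Everything in it, the GRU-based modules, the vocabulary and hidden sizes, and the use of the discriminator's score as a REINFORCE reward with a mean baseline, is an illustrative assumption rather than the authors' implementation; in particular, the paper's Text-Attention component and the document encoder are omitted.

```python
# Minimal sketch of an adversarial RL loop: a generator samples summaries,
# a discriminator scores real vs. generated, and the generator is updated
# by policy gradient using the discriminator score as reward.
# All sizes and module choices are hypothetical, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID, MAX_LEN, BOS = 4000, 64, 128, 30, 0  # assumed dimensions

class Generator(nn.Module):
    """Character-level decoder that emits one summary token per step."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def sample(self, batch):
        """Sample a summary and keep per-step log-probs for REINFORCE."""
        tok = torch.full((batch, 1), BOS, dtype=torch.long)
        h, toks, logps = None, [], []
        for _ in range(MAX_LEN):
            o, h = self.rnn(self.emb(tok), h)
            dist = torch.distributions.Categorical(logits=self.out(o[:, -1]))
            tok = dist.sample().unsqueeze(1)
            toks.append(tok)
            logps.append(dist.log_prob(tok.squeeze(1)))
        return torch.cat(toks, 1), torch.stack(logps, 1)

class Discriminator(nn.Module):
    """Scores a token sequence as human-written (1) vs. generated (0)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, 1)

    def forward(self, seq):
        _, h = self.rnn(self.emb(seq))
        return torch.sigmoid(self.out(h[-1])).squeeze(1)

G, D = Generator(), Discriminator()
g_opt = torch.optim.Adam(G.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(100):
    real = torch.randint(1, VOCAB, (8, MAX_LEN))  # stand-in for human summaries
    fake, logps = G.sample(8)

    # Discriminator update: tell human summaries apart from generated ones.
    d_loss = (F.binary_cross_entropy(D(real), torch.ones(8))
              + F.binary_cross_entropy(D(fake.detach()), torch.zeros(8)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: REINFORCE with the discriminator score as reward.
    reward = D(fake).detach()
    g_loss = -(logps.sum(1) * (reward - reward.mean())).mean()
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Because the sampled tokens are discrete, the discriminator's signal cannot flow back through them directly; the policy-gradient step is what lets the generator "evolve" against the discriminator, which is the core of the adversarial RL strategy the abstract outlines.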

Citation (APA)

Xu, H., Cao, Y., Shang, Y., Liu, Y., Tan, J., & Guo, L. (2018). Adversarial Reinforcement Learning for Chinese Text Summarization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10862 LNCS, pp. 519–532). Springer Verlag. https://doi.org/10.1007/978-3-319-93713-7_47
