Self-attentive model for headline generation


Abstract

Headline generation is a special type of text summarization task. Although training data for this task is available in almost unlimited quantities, it remains challenging: learning to generate headlines for news articles requires strong natural language understanding from the model. To address this, we applied the recent Universal Transformer architecture paired with byte-pair encoding and achieved new state-of-the-art results on the New York Times Annotated Corpus, with a ROUGE-L F1-score of 24.84 and a ROUGE-2 F1-score of 13.48. We also present the new RIA corpus, on which we reach a ROUGE-L F1-score of 36.81 and a ROUGE-2 F1-score of 22.15.
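
The distinguishing idea of the Universal Transformer, compared with the standard Transformer, is that one layer's weights are reused recurrently across depth steps, with a per-step embedding injected at each iteration. The following is a minimal PyTorch sketch of that idea only, not the authors' implementation; all hyperparameters and names (e.g. `num_steps`, `step_emb`) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UniversalTransformerEncoder(nn.Module):
    """Sketch of a Universal Transformer-style encoder: a single shared
    Transformer layer applied for a fixed number of recurrent steps.
    Hyperparameters are illustrative, not taken from the paper."""

    def __init__(self, d_model=512, nhead=8, num_steps=6):
        super().__init__()
        # One layer whose weights are shared across all depth steps.
        self.layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.num_steps = num_steps
        # Learned embedding that tells the layer which step it is on.
        self.step_emb = nn.Embedding(num_steps, d_model)

    def forward(self, x):
        # x: (batch, seq_len, d_model) token embeddings (plus positional encodings).
        for t in range(self.num_steps):
            x = x + self.step_emb.weight[t]  # inject the depth-step signal
            x = self.layer(x)                # same weights reused every step
        return x

# Usage: encode a batch of 2 sequences of length 16.
enc = UniversalTransformerEncoder()
out = enc(torch.randn(2, 16, 512))  # -> (2, 16, 512)
```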

Citation (APA)

Gavrilov, D., Kalaidin, P., & Malykh, V. (2019). Self-attentive model for headline generation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11438 LNCS, pp. 87–93). Springer Verlag. https://doi.org/10.1007/978-3-030-15719-7_11
