Extractive Text Summarization Using BERT

Abstract

Automatic text summarization is one of the most challenging and exciting problems in Natural Language Processing (NLP). With the ongoing data explosion, effectively summarizing textual content, reducing text size and reading time for a user without losing the original meaning, is the need of the hour. Recently, the use of deep learning models such as recurrent neural networks and long short-term memory networks for text summarization has led to high performance. The breakthrough, however, has come with the non-sequential, transformer-based Bidirectional Encoder Representations from Transformers (BERT). This paper presents extractive text summarization using BERT, achieving an average ROUGE-1 score of 41.47, a compression ratio of 60%, and a 66% reduction in user reading time on the CNN/Daily Mail dataset. BERT-based extractive text summarization is therefore highly effective.
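
This page carries only the abstract, so as an illustration here is a minimal sketch of one common BERT-based extractive approach: embed each sentence with a pretrained bert-base-uncased model (via the Hugging Face transformers library, an assumption on our part, not necessarily the authors' setup) and select the sentences closest to the whole-document embedding.

```python
# Hypothetical sketch of BERT-based extractive summarization:
# score each sentence by the cosine similarity of its [CLS] embedding
# to the whole-document embedding, then keep the top-k sentences.
# This is an illustration, not the authors' implementation.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Return the [CLS] vector for `text`, truncated to BERT's 512-token limit."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[0, 0]  # [CLS] token embedding

def summarize(sentences: list[str], k: int = 3) -> str:
    """Extract the k sentences most similar to the full document."""
    doc_vec = embed(" ".join(sentences))
    scores = [torch.cosine_similarity(embed(s), doc_vec, dim=0).item()
              for s in sentences]
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:k]
    return " ".join(sentences[i] for i in sorted(top))  # keep original order
```

Keeping k of n sentences directly controls the compression ratio; the paper reports 60% on CNN/Daily Mail, though its exact sentence scoring and selection may differ from this sketch.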

Citation (APA)

Patil, P., Rao, C., Reddy, G., Ram, R., & Meena, S. M. (2022). Extractive Text Summarization Using BERT. In Lecture Notes in Networks and Systems (Vol. 237, pp. 741–747). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-16-6407-6_63
