Fine Tuning Transformer Based BERT Model for Generating the Automatic Book Summary

Citations: 1
Readers (Mendeley): 10

Abstract

Most text summarization research focuses on short documents, and comparatively little work addresses long-document summarization. In addition, extractive summarization has received far more attention than abstractive summarization. Unlike extractive summarization, abstractive summarization does not simply copy essential words from the original text; it paraphrases the content to approach a human-written summary. Machine learning and deep learning models are being adapted to contemporary pre-trained models such as transformers. Transformer-based language models are attracting considerable attention because their self-supervised pre-training allows them to be fine-tuned for Natural Language Processing (NLP) downstream tasks such as text summarization. The proposed work investigates the use of transformers for abstractive summarization and is evaluated on a book, a representative long document, to assess the model's performance.
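The abstract does not describe the authors' exact fine-tuning pipeline, so the sketch below shows one common way to warm-start an abstractive summarizer from BERT checkpoints, assuming the Hugging Face transformers library. The checkpoint name bert-base-uncased, the example texts, and the sequence lengths are illustrative assumptions, not the paper's configuration.

# Minimal sketch: warm-starting a BERT-to-BERT abstractive summarizer.
# Assumes the Hugging Face transformers library; the checkpoint, texts,
# and lengths below are illustrative, not the authors' actual setup.
from transformers import BertTokenizerFast, EncoderDecoderModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# Tie a pre-trained BERT encoder to a pre-trained BERT decoder so the
# resulting seq2seq model can be fine-tuned for generation.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.train()

# Hypothetical training pair: a book chapter and its reference summary.
chapter = "Long chapter text goes here ..."
summary = "Reference summary goes here ..."

inputs = tokenizer(chapter, max_length=512, truncation=True,
                   padding="max_length", return_tensors="pt")
labels = tokenizer(summary, max_length=128, truncation=True,
                   padding="max_length", return_tensors="pt").input_ids
labels[labels == tokenizer.pad_token_id] = -100  # exclude padding from the loss

# One fine-tuning step: cross-entropy loss over the summary tokens.
outputs = model(input_ids=inputs.input_ids,
                attention_mask=inputs.attention_mask,
                labels=labels)
outputs.loss.backward()

Because BERT's encoder accepts at most 512 tokens, a book-length input would normally be split into chunks before fine-tuning; the chunking strategy is one of the design choices any long-document summarizer has to make.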

Cite

APA

Howlader, P., Paul, P., Madavi, M., Bewoor, L., & Deshpande, V. S. (2022). Fine Tuning Transformer Based BERT Model for Generating the Automatic Book Summary. International Journal on Recent and Innovation Trends in Computing and Communication, 10, 347–352. https://doi.org/10.17762/ijritcc.v10i1s.5902
