Exploratory analysis on the natural language processing models for task specific purposes

Abstract

Natural language processing (NLP) has become a widespread technology for understanding and analysing human language. It is commonly used for text processing tasks such as summarisation, semantic analysis, classification, question-answering, and natural language inference. Choosing the right model for a given task, however, remains a persistent obstacle. This study therefore compares modern NLP models on the tasks listed above, benchmarking them on datasets such as SQuAD and GLUE. The models compared are BERT, RoBERTa, DistilBERT, BART, ALBERT, and the text-to-text transfer transformer (T5). The aim is to understand each model's underlying architecture, its effect on the use case, and where it falls short. We observed that RoBERTa was more effective than ALBERT, DistilBERT, and BERT on tasks related to semantic analysis, natural language inference, and question-answering, which we attribute to RoBERTa's dynamic masking. For summarisation, although the BART and T5 models have very similar architectures, BART performed slightly better than T5.
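The dynamic-masking distinction mentioned above can be illustrated with a minimal sketch (not the authors' code): in BERT-style static masking the `[MASK]` pattern is fixed once during preprocessing and reused every epoch, whereas RoBERTa samples a fresh pattern each time a sequence is seen. The `mask_tokens` helper and the 15% masking rate are illustrative assumptions.

```python
import random

def mask_tokens(tokens, mask_prob=0.15, rng=None):
    # Replace roughly mask_prob of the tokens with "[MASK]";
    # the remaining tokens pass through unchanged.
    rng = rng or random.Random()
    return [tok if rng.random() > mask_prob else "[MASK]" for tok in tokens]

tokens = ["the", "model", "predicts", "the", "masked", "word"] * 4

# Static masking (BERT-style): one pattern is drawn at preprocessing
# time and the same masked copy is reused for every training epoch.
static = mask_tokens(tokens, rng=random.Random(0))
static_epochs = [static for _ in range(3)]

# Dynamic masking (RoBERTa-style): a fresh pattern is sampled each
# time the sequence is fed to the model, so across epochs the model
# is asked to predict different subsets of the tokens.
rng = random.Random(0)
dynamic_epochs = [mask_tokens(tokens, rng=rng) for _ in range(3)]
```

Every dynamic epoch keeps the sequence length and only substitutes `[MASK]` for original tokens; the variety of prediction targets over training is the property the abstract credits for RoBERTa's stronger results.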


Citation (APA)

Shidaganti, G., Shetty, R., Edara, T., Srinivas, P., & Tammineni, S. C. (2024). Exploratory analysis on the natural language processing models for task specific purposes. Bulletin of Electrical Engineering and Informatics, 13(2), 1245–1255. https://doi.org/10.11591/eei.v13i2.6360
