Transfer Learning in Natural Language Processing: A Survey

  • Dhyani B

Abstract

Transfer learning is a rapidly expanding area within natural language processing (NLP) and machine learning. It applies knowledge captured by previously trained models to a variety of related downstream tasks. This paper presents a comprehensive survey of transfer learning techniques in NLP, focusing on five key pre-trained language models: (1) BERT, (2) GPT, (3) ELMo, (4) RoBERTa, and (5) ALBERT. We discuss the fundamental concepts, methodologies, and performance benchmarks of each model, highlighting the various approaches taken to leverage pre-existing knowledge for effective learning. Furthermore, we provide an overview of the latest advancements and challenges in transfer learning for NLP, along with promising directions for future research in this domain.
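The core pattern the abstract describes — reusing a pre-trained model's representations and training only a small task-specific component — can be sketched in a few lines. The snippet below is a minimal, self-contained illustration with NumPy, not any real BERT/GPT workflow: the "pre-trained" feature extractor is a fixed random projection standing in for learned weights, and all names (`extract_features`, `w_head`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- "Pre-trained" stage (stand-in for a BERT/GPT/ELMo-style encoder) ---
# A frozen feature extractor. Here it is a fixed random projection purely
# to illustrate the pattern; a real system would load learned weights.
W_pretrained = rng.normal(size=(16, 8))  # maps 16-dim inputs to 8-dim features

def extract_features(x):
    """Frozen feature extractor: these weights are NOT updated below."""
    return np.tanh(x @ W_pretrained)

# --- Fine-tuning stage: train only a small task head on new labeled data ---
X = rng.normal(size=(200, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy binary labels

w_head = np.zeros(8)   # task-specific head: the only trainable parameters
b_head = 0.0
lr = 0.5

feats = extract_features(X)              # computed once; extractor is frozen
for _ in range(500):
    logits = feats @ w_head + b_head
    p = 1.0 / (1.0 + np.exp(-logits))    # sigmoid
    grad = p - y                         # d(log-loss)/d(logits)
    w_head -= lr * (feats.T @ grad) / len(y)   # update head only
    b_head -= lr * grad.mean()

preds = (1.0 / (1.0 + np.exp(-(feats @ w_head + b_head)))) > 0.5
accuracy = (preds == y.astype(bool)).mean()
print(f"fine-tuned head accuracy: {accuracy:.2f}")
```

The design choice this mirrors is central to the surveyed models: pre-training captures general-purpose features once, so each downstream task needs only a small head and a modest amount of labeled data.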

Citation (APA)

Dhyani, B. (2021). Transfer Learning in Natural Language Processing: A Survey. Mathematical Statistician and Engineering Applications, 70(1), 303–311. https://doi.org/10.17762/msea.v70i1.2312
