Abstract
With the recent advances in deep learning, various approaches to improving pre-trained language models (PLMs) have been proposed. PLMs have advanced the state of the art (SOTA) on a wide range of natural language processing (NLP) tasks, such as machine translation, text classification, question answering, text summarization, information retrieval, recommendation systems, and named entity recognition. In this paper, we provide a comprehensive review of earlier embedding models as well as recent breakthroughs in the field of PLMs. We then compare and contrast the various models and analyse how they are built (number of parameters, compression techniques, etc.). Finally, we discuss the major open issues and future directions for each of the main points.
Citation
Mars, M. (2022). From Word Embeddings to Pre-Trained Language Models: A State-of-the-Art Walkthrough. Applied Sciences, 12(17), 8805. https://doi.org/10.3390/app12178805