Language representation models: An overview

Citations: 15 · Mendeley readers: 29

Abstract

Over the last few decades, text mining has been used to extract knowledge from free text. Applying neural networks and deep learning to natural language processing (NLP) has led to many advances on real-world language problems. Developments of the past five years have produced techniques that make transfer learning practical in NLP. Progress in the field has been substantial, culminating in models that outperform the human baseline on the General Language Understanding Evaluation (GLUE) benchmark. This paper presents a targeted literature review that outlines, describes, explains, and contextualizes the key techniques behind this milestone, focusing on neural language models that mark vital steps towards a general language representation model.

Cite (APA)

Schomacker, T., & Tropmann-Frick, M. (2021). Language representation models: An overview. Entropy, 23(11), 1422. https://doi.org/10.3390/e23111422
