A Bibliometric Review of Large Language Models Research from 2017 to 2023

  • Fan L
  • Li L
  • Ma Z
  • et al.
8 Citations · 200 Readers on Mendeley

Abstract

Large language models (LLMs), such as OpenAI’s Generative Pre-trained Transformer (GPT), are a class of language models that have demonstrated outstanding performance across a range of natural language processing (NLP) tasks. Because of their ability to generate human-like language and their potential to revolutionize science and technology, LLMs have become a highly sought-after research topic. In this study, we conduct bibliometric and discourse analyses of the scholarly literature on LLMs. Synthesizing over 5,000 publications, this paper serves as a roadmap for researchers, practitioners, and policymakers navigating the current landscape of LLM research. We present research trends from 2017 to early 2023, identifying patterns in research paradigms and collaborations. We begin by analyzing the core algorithmic developments and NLP tasks fundamental to LLM research, then investigate applications of LLMs across fields and domains, including medicine, engineering, social science, and the humanities. Our review also reveals the dynamic, fast-paced evolution of LLM research. Overall, this paper offers valuable insights into the current state, impact, and potential of LLM research and its applications.

Citation (APA)

Fan, L., Li, L., Ma, Z., Lee, S., Yu, H., & Hemphill, L. (2024). A Bibliometric Review of Large Language Models Research from 2017 to 2023. ACM Transactions on Intelligent Systems and Technology. https://doi.org/10.1145/3664930
