Editing Large Language Models: Problems, Methods, and Opportunities

149 citations · 112 readers (Mendeley)
Abstract

Despite the ability to train capable LLMs, the methodology for maintaining their relevancy and rectifying errors remains elusive. To this end, the past few years have witnessed a surge in techniques for editing LLMs, the objective of which is to efficiently alter the behavior of LLMs within a specific domain without negatively impacting performance across other inputs. This paper embarks on a deep exploration of the problems, methods, and opportunities related to model editing for LLMs. In particular, we provide an exhaustive overview of the task definition and challenges associated with model editing, along with an in-depth empirical analysis of the most progressive methods currently at our disposal. We also build a new benchmark dataset to facilitate a more robust evaluation and pinpoint enduring issues intrinsic to existing techniques. Our objective is to provide valuable insights into the effectiveness and feasibility of each editing technique, thereby assisting the community in making informed decisions on the selection of the most appropriate method for a specific task or context.
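The abstract's core objective can be made concrete with a toy sketch: a model editor takes a model and an (input, desired output) pair and returns an edited model whose behavior changes only for the in-scope input, while all other inputs are answered as before (locality). Everything below is illustrative; the function names and the closure-based mechanism are assumptions for exposition, not the methods surveyed in the paper.

```python
# Hypothetical sketch of the model-editing objective: alter behavior
# within a specific scope without impacting other inputs. All names
# here are illustrative, not drawn from the paper.

def apply_edit(model, edit_input, new_output):
    """Return an edited model overriding behavior for edit_input only."""
    def edited(x):
        if x == edit_input:   # in-scope: behavior is altered
            return new_output
        return model(x)       # out-of-scope: unchanged (locality)
    return edited

# Toy base "model": answers one factual query.
base = lambda q: "Paris" if q == "capital of France?" else "unknown"

edited = apply_edit(base, "capital of France?", "Lyon")

print(edited("capital of France?"))  # edited fact -> Lyon
print(edited("capital of Spain?"))   # unaffected input -> unknown
```

Real editing techniques modify parameters or add auxiliary modules rather than wrapping a closure, but the evaluation criteria the paper discusses (did the edit take effect, and did unrelated behavior stay fixed) map directly onto the two checks above.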

Citation (APA)

Yao, Y., Wang, P., Tian, B., Cheng, S., Li, Z., Deng, S., … Zhang, N. (2023). Editing Large Language Models: Problems, Methods, and Opportunities. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 10222–10240). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.632
