Improving Sequential Model Editing with Fact Retrieval

10 Citations · 13 Readers

Abstract

The task of sequential model editing is to fix erroneous knowledge in Pre-trained Language Models (PLMs) efficiently, precisely, and continuously. Although existing methods can handle a small number of modifications, they suffer a performance decline or require additional annotated data as the number of edits increases. In this paper, we propose a Retrieval-Augmented Sequential Model Editing framework (RASE) that leverages factual information to enhance editing generalization and to guide the identification of edits by retrieving related facts from a fact-patch memory we construct. Our main findings are: (i) state-of-the-art models can hardly correct massive mistakes stably and efficiently; (ii) even when scaled to thousands of edits, RASE significantly enhances editing generalization and maintains consistent performance and efficiency; (iii) RASE can edit large-scale PLMs and improve the performance of different editors. Moreover, it can be integrated with ChatGPT to further improve performance. Our code and data are available at: https://github.com/sev777/RASE.
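The abstract outlines RASE's central mechanism: edits are stored as fact-patch pairs in an external memory, and at inference time related facts are retrieved to decide whether (and with which patch) to override the base PLM. The sketch below illustrates one way such a retrieve-then-route loop could be wired up; `FactPatchMemory`, `embed`, `patch.apply`, and the similarity threshold are illustrative assumptions, not the paper's actual interface.

```python
import numpy as np

# Minimal sketch of a retrieval-augmented editing loop as described in the
# abstract. All names (FactPatchMemory, embed, patch.apply, threshold) are
# illustrative assumptions, not the authors' actual implementation.

class FactPatchMemory:
    """Stores (fact embedding, patch) pairs; retrieves by cosine similarity."""

    def __init__(self):
        self.keys = []     # unit-normalized fact embeddings
        self.patches = []  # model patches keyed by the fact they correct

    def add(self, fact_embedding, patch):
        self.keys.append(fact_embedding / np.linalg.norm(fact_embedding))
        self.patches.append(patch)

    def retrieve(self, query_embedding):
        """Return the best-matching patch and its similarity score."""
        if not self.keys:
            return None, 0.0
        q = query_embedding / np.linalg.norm(query_embedding)
        sims = np.stack(self.keys) @ q            # cosine similarity to each fact
        best = int(np.argmax(sims))
        return self.patches[best], float(sims[best])


def edit_aware_predict(model, memory, query, embed, threshold=0.8):
    """Route the query through a patched model only when a related edited
    fact is found; otherwise fall back to the unmodified base PLM."""
    patch, score = memory.retrieve(embed(query))
    if patch is not None and score >= threshold:
        return patch.apply(model, query)   # answer with the edited knowledge
    return model(query)                    # unrelated query: base model answers
```

Because the patches live outside the PLM's weights in this design, the base model stays untouched and unrelated queries fall through to it unchanged, which is how a retrieval-based scheme can preserve locality while scaling to thousands of edits.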

Cite

APA

Han, X., Li, R., Tan, H., Wang, Y., Chai, Q., & Pan, J. Z. (2023). Improving Sequential Model Editing with Fact Retrieval. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 11209–11224). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.749
