The potential of using a large language model (LLM) as a knowledge base (KB) has sparked significant interest. To manage the knowledge acquired by LLMs, we need to ensure that edits to learned facts respect internal logical constraints, which are known as the dependency of knowledge. Existing work on editing LLMs has partially addressed the issue of dependency, where the editing of a fact should apply to its lexical variations without disrupting irrelevant facts. However, such work neglects the dependency between a fact and its logical implications. We propose an evaluation protocol with an accompanying question-answering dataset, DepEdit, that provides a comprehensive assessment of the editing process with respect to both of the above notions of dependency. Our protocol sets up a controlled environment in which we edit facts and monitor their impact on LLMs, along with the impact of the implications derived from those facts via If-Then rules. Extensive experiments on DepEdit show that existing knowledge editing methods are sensitive to the surface form of knowledge and have limited performance in inferring the implications of edited facts.
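For concreteness, the sketch below illustrates the kind of check such a protocol implies: after a fact is edited, the model is probed on the edit itself, a paraphrase, an unrelated fact, and the implication derived by an If-Then rule. This is only an assumed illustration of the evaluation logic; the class names, the example rule, and the query templates are hypothetical and not taken from the paper or the DepEdit dataset.

```python
# A minimal sketch (not the authors' released code) of a dependency-aware
# editing evaluation. All names below (Fact, IfThenRule, evaluate_edit,
# capital_rule) are hypothetical illustrations.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Fact:
    subject: str
    relation: str
    obj: str


@dataclass
class IfThenRule:
    # If a fact with this relation is edited, the rule derives an implied fact
    # that the edited model should also answer correctly.
    premise_relation: str
    conclude: Callable[[Fact], Fact]


# Example rule: "X is the capital of Y" implies "X is located in Y".
capital_rule = IfThenRule(
    premise_relation="capital_of",
    conclude=lambda f: Fact(f.subject, "located_in", f.obj),
)


def evaluate_edit(
    model_answer: Callable[[str], str],  # queries the edited LLM
    edited: Fact,                        # the fact written into the model
    paraphrase: str,                     # a lexical variation of the edit query
    unrelated: Fact,                     # a fact the edit must not disturb
    unrelated_expected: str,
    rule: IfThenRule,
) -> dict:
    """Probe an edited model on the dependency notions described in the abstract."""
    results = {}
    # 1) Edit success: the edited fact itself.
    results["edit"] = model_answer(
        f"What is the {edited.relation} of {edited.subject}?") == edited.obj
    # 2) Surface-form robustness: a paraphrased query should yield the new object.
    results["paraphrase"] = model_answer(paraphrase) == edited.obj
    # 3) Specificity: irrelevant facts must remain unchanged.
    results["unrelated"] = model_answer(
        f"What is the {unrelated.relation} of {unrelated.subject}?") == unrelated_expected
    # 4) Implication awareness: the consequence derived by the If-Then rule.
    if edited.relation == rule.premise_relation:
        implied = rule.conclude(edited)
        results["implication"] = model_answer(
            f"What is the {implied.relation} of {implied.subject}?") == implied.obj
    return results
```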
Li, Z., Arous, I., Reddy, S., & Cheung, J. C. K. (2023). Evaluating Dependencies in Fact Editing for Language Models: Specificity and Implication Awareness. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 7623–7626). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.511