Adaptable and Interpretable Neural Memory Over Symbolic Knowledge


Abstract

Past research has demonstrated that large neural language models (LMs) encode surprising amounts of factual information; however, augmenting or modifying this information requires modifying a corpus and retraining, which is computationally expensive. To address this problem, we develop a neural LM that includes an interpretable neuro-symbolic KB in the form of a “fact memory”. Each element of the fact memory is formed from a triple of vectors, where each vector corresponds to a KB entity or relation. Our LM improves performance on knowledge-intensive question-answering tasks, sometimes dramatically, including a 27-point increase in one setting of WebQuestionsSP over a state-of-the-art open-book model, despite using 5% of the parameters. Most interestingly, we demonstrate that the model can be modified, without any retraining, by updating the fact memory.
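The triple-of-vectors design lends itself to a key-value reading: a fact (subject, relation, object) is stored with a key built from the subject and relation embeddings and a value holding the object embedding, so new facts can be injected by appending entries rather than retraining. Below is a minimal, hypothetical sketch of that idea in Python with NumPy; the embedding tables, dimension, and helper names (add_fact, lookup) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64  # assumed embedding width; purely illustrative

# Toy embedding tables standing in for learned KB entity/relation encodings.
entity_emb = {e: rng.normal(size=DIM)
              for e in ["Paris", "France", "Berlin", "Germany"]}
relation_emb = {"capital_of": rng.normal(size=DIM)}

# Fact memory: each element is a triple of vectors. Keys concatenate the
# subject and relation embeddings; values are the object embeddings.
keys, values, objects = [], [], []

def add_fact(s, r, o):
    """Inject a (subject, relation, object) triple; no retraining involved."""
    keys.append(np.concatenate([entity_emb[s], relation_emb[r]]))
    values.append(entity_emb[o])  # in the full model, values feed the LM
    objects.append(o)             # symbolic label kept for readability

for s, r, o in [("Paris", "capital_of", "France"),
                ("Berlin", "capital_of", "Germany")]:
    add_fact(s, r, o)

def lookup(subject, relation):
    """Score all fact keys against the query by dot product and return
    the object entity of the best-matching triple."""
    q = np.concatenate([entity_emb[subject], relation_emb[relation]])
    scores = np.stack(keys) @ q
    return objects[int(np.argmax(scores))]

print(lookup("Paris", "capital_of"))  # -> France

# Knowledge update without retraining: register new embeddings and a new
# triple, and the memory immediately answers queries about it.
entity_emb["Tokyo"] = rng.normal(size=DIM)
entity_emb["Japan"] = rng.normal(size=DIM)
add_fact("Tokyo", "capital_of", "Japan")
print(lookup("Tokyo", "capital_of"))  # -> Japan
```

Because lookup is just similarity search over stored keys, adding or editing a triple changes the model's answers immediately, which is the adaptability property the abstract highlights.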

Citation (APA)

Verga, P., Sun, H., Soares, L. B., & Cohen, W. W. (2021). Adaptable and interpretable neural memory over symbolic knowledge. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2021) (pp. 3678–3691). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.naacl-main.288
