NegatER: Unsupervised Discovery of Negatives in Commonsense Knowledge Bases

Abstract

Codifying commonsense knowledge in machines is a longstanding goal of artificial intelligence. Recently, much progress toward this goal has been made with automatic knowledge base (KB) construction techniques. However, such techniques focus primarily on the acquisition of positive (true) KB statements, even though negative (false) statements are often also important for discriminative reasoning over commonsense KBs. As a first step toward the latter, this paper proposes NegatER, a framework that ranks potential negatives in commonsense KBs using a contextual language model (LM). Importantly, as most KBs do not contain negatives, NegatER relies only on the positive knowledge in the LM and does not require ground-truth negative examples. Experiments demonstrate that, compared to multiple contrastive data augmentation approaches, NegatER yields negatives that are more grammatical, coherent, and informative, leading to statistically significant accuracy improvements in a challenging KB completion task and confirming that the positive knowledge in LMs can be “re-purposed” to generate negative knowledge.
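The abstract describes the approach only at a high level. As a rough illustration, the sketch below shows one simple way the core idea could be instantiated: an LM fine-tuned as a truth classifier on the KB's positive statements is re-purposed to rank corrupted statements as candidate negatives. The model name, corruption scheme, and verbalization template are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of the idea described above:
# an LM trained only on *positive* KB statements is re-purposed to rank
# corrupted statements as candidate negatives.

import random
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # assumption: any encoder LM could stand in
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()  # in practice, this classifier is first fine-tuned on the positive KB

positives = [
    ("bird", "CapableOf", "fly"),
    ("fish", "AtLocation", "water"),
    ("knife", "UsedFor", "cutting bread"),
]

def corrupt(triples):
    """Generate candidate negatives by swapping tails across triples."""
    tails = [t for _, _, t in triples]
    return [(h, r, random.choice([x for x in tails if x != t]))
            for h, r, t in triples]

@torch.no_grad()
def truth_score(head, rel, tail):
    """Probability the classifier assigns to the statement being true."""
    text = f"{head} {rel} {tail}"  # naive verbalization; a real template helps
    logits = model(**tokenizer(text, return_tensors="pt")).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Rank candidates by ascending truth score: the statements the fine-tuned LM
# is most confident are false are the most promising negatives for the KB.
candidates = corrupt(positives)
ranked = sorted(candidates, key=lambda t: truth_score(*t))
for triple in ranked:
    print(triple)
```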

Citation (APA)

Safavi, T., Zhu, J., & Koutra, D. (2021). NEGATER: Unsupervised Discovery of Negatives in Commonsense Knowledge Bases. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 5633–5646). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.456
