Fusing Context Into Knowledge Graph for Commonsense Question Answering

52 citations · 133 Mendeley readers

Abstract

Commonsense question answering (QA) requires a model to grasp commonsense and factual knowledge in order to answer questions about world events. Many prior methods couple language models with knowledge graphs (KGs). However, although a KG contains rich structural information, it lacks the context needed for a more precise understanding of each concept. This creates a gap when fusing knowledge graphs into language modeling, especially when labeled data are scarce. We therefore propose to use external entity descriptions to provide contextual information for knowledge understanding. We retrieve descriptions of related concepts from Wiktionary and feed them as additional input to pre-trained language models. The resulting model achieves state-of-the-art results on the CommonsenseQA dataset and the best result among non-generative models on OpenBookQA.
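The core mechanism in the abstract lends itself to a short illustration. Below is a minimal Python sketch (not the authors' released code) of the idea: look up definitions for the concepts mentioned in a question/choice pair and concatenate them with the QA text into a single sequence for a pre-trained language model. The DEFINITIONS dictionary, the build_input helper, and the </s> separator are hypothetical stand-ins for the paper's actual Wiktionary retrieval and tokenizer-specific encoding.

```python
# A minimal sketch, assuming a toy definition store in place of the
# paper's Wiktionary retrieval. It shows only the input-construction
# step: concept descriptions are prepended as extra context before
# the question and answer choice are encoded by a language model.

DEFINITIONS = {
    "revolving door": "a door that rotates around a central vertical axis",
    "bank": "an institution where people deposit and borrow money",
}

def build_input(question: str, choice: str, concepts: list[str]) -> str:
    """Concatenate retrieved concept definitions with the QA pair,
    mimicking the 'descriptions as additional LM input' idea."""
    context = " ".join(
        f"{c}: {DEFINITIONS[c]}" for c in concepts if c in DEFINITIONS
    )
    # One flat sequence the language model can encode; a real tokenizer
    # would use its own separators (e.g. [SEP] or </s>).
    return f"{context} </s> {question} </s> {choice}"

print(build_input(
    "A revolving door is convenient for two-direction travel, "
    "but it also serves as a security measure where?",
    "bank",
    ["revolving door", "bank"],
))
```

In the full model, one such sequence would be built per answer choice and scored by the pre-trained encoder alongside the KG-derived features; the sketch above covers only the description-fusion input format.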

Citation (APA)

Xu, Y., Zhu, C., Xu, R., Liu, Y., Zeng, M., & Huang, X. (2021). Fusing Context Into Knowledge Graph for Commonsense Question Answering. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 1201–1207). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.102
