Improving Unsupervised Commonsense Reasoning Using Knowledge-Enabled Natural Language Inference

7 citations · 53 Mendeley readers

Abstract

Recent methods based on pre-trained language models have shown strong supervised performance on commonsense reasoning, but they rely on expensive data annotation and time-consuming training. We therefore focus on unsupervised commonsense reasoning and show the effectiveness of a common framework, Natural Language Inference (NLI), for solving diverse commonsense reasoning tasks. By leveraging transfer learning from large NLI datasets and injecting crucial knowledge from commonsense sources such as ATOMIC 2020 and ConceptNet, our method achieves state-of-the-art unsupervised performance on two commonsense reasoning tasks: WinoWhy and CommonsenseQA. Further analysis demonstrates the benefits of multiple categories of knowledge, while questions involving quantities and antonyms remain challenging.
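
To make the reformulation concrete, the sketch below (a minimal illustration, not the authors' released pipeline) shows how a multiple-choice commonsense question can be scored with an off-the-shelf NLI model: the question, optionally prefixed with retrieved commonsense facts, serves as the premise, and each candidate answer is wrapped into a hypothesis. The model checkpoint, hypothesis template, and knowledge-injection step are all illustrative assumptions.

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    # Assumed off-the-shelf NLI checkpoint; the paper's exact model may differ.
    MODEL_NAME = "roberta-large-mnli"
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
    model.eval()

    # Index of the "entailment" label in this checkpoint's output head.
    ENTAILMENT = model.config.label2id["ENTAILMENT"]

    def entailment_prob(premise: str, hypothesis: str) -> float:
        """Return P(premise entails hypothesis) under the NLI model."""
        inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        return torch.softmax(logits, dim=-1)[0, ENTAILMENT].item()

    def answer(question, choices, knowledge=""):
        """Pick the choice whose hypothesis is best entailed by the premise.

        `knowledge` stands in for retrieved commonsense facts (e.g., from
        ConceptNet or ATOMIC 2020) prepended to the premise; this simple
        concatenation is a hypothetical injection scheme, not necessarily
        the one used in the paper.
        """
        premise = (knowledge + " " + question).strip()
        scores = [entailment_prob(premise, f"The answer is {c}.") for c in choices]
        return max(zip(scores, choices))[1]

    # CommonsenseQA-style usage:
    print(answer(
        "Where would you put a plate after eating from it?",
        ["dishwasher", "bookshelf", "garden"],
        knowledge="A dishwasher is used for cleaning dishes.",
    ))

In this zero-shot setup no task-specific training occurs; all the signal comes from the NLI model's transfer and from whatever knowledge statements are prepended, which is why the choice of knowledge source matters.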

Citation (APA)

Huang, C., He, W., & Liu, Y. (2021). Improving Unsupervised Commonsense Reasoning Using Knowledge-Enabled Natural Language Inference. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 4875–4885). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-emnlp.420
