Retrieval Augmentation for Commonsense Reasoning: A Unified Approach

Citations: 10
Readers: 42 (Mendeley users who have this article in their library)

Abstract

A common thread of retrieval-augmented methods in the existing literature focuses on retrieving encyclopedic knowledge, such as Wikipedia, which offers well-defined entity and relation spaces that can be modeled. However, applying such methods to commonsense reasoning tasks faces two unique challenges: the lack of a general large-scale corpus for retrieval and a corresponding effective commonsense retriever. In this paper, we systematically investigate how to leverage commonsense knowledge retrieval to improve commonsense reasoning tasks. We propose a unified framework of Retrieval-Augmented Commonsense reasoning (called RACo), including a newly constructed commonsense corpus with over 20 million documents and novel strategies for training a commonsense retriever. We conduct experiments on four different commonsense reasoning tasks. Extensive evaluation results show that our proposed RACo significantly outperforms other knowledge-enhanced methods, achieving new state-of-the-art (SoTA) performance on the CommonGen and CREAK leaderboards. Our code is available at https://github.com/wyu97/RACo.
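The abstract describes a retrieve-then-read pipeline: a dense retriever scores documents from a commonsense corpus against a query, and the top-ranked documents are fed to a reader together with the query. The sketch below illustrates this idea only; the bag-of-words embedding is a toy stand-in for the paper's trained bi-encoder retriever, and the corpus snippets are invented examples, not taken from RACo's 20-million-document corpus.

```python
# Illustrative retrieve-then-read sketch (toy embedding, hypothetical corpus).
from collections import Counter
import numpy as np

corpus = [
    "people use umbrellas when it rains",
    "a kitchen is a room where food is cooked",
    "rain makes streets wet",
]

# Vocabulary built from the corpus; a real system would use a learned encoder.
vocab = sorted({tok for doc in corpus for tok in doc.lower().split()})

def embed(text: str) -> np.ndarray:
    """Toy dense embedding: normalized bag-of-words over the corpus vocab."""
    counts = Counter(text.lower().split())
    vec = np.array([float(counts[w]) for w in vocab])
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

doc_matrix = np.stack([embed(d) for d in corpus])

def retrieve(query: str, k: int = 2) -> list:
    """Return the k documents with the highest inner-product score."""
    scores = doc_matrix @ embed(query)
    return [corpus[i] for i in np.argsort(-scores)[:k]]

def reader_input(query: str, k: int = 2) -> str:
    """Concatenate retrieved evidence with the query for a downstream reader."""
    return " [SEP] ".join(retrieve(query, k) + [query])

print(reader_input("why do streets get wet when it rains"))
```

In a full system, the concatenated string would be consumed by a generative reader (e.g., a seq2seq model) rather than printed; the key design choice shown here is that evidence selection and answer generation are decoupled.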

Citation (APA)
Yu, W., Zhu, C., Zhang, Z., Wang, S., Zhang, Z., Fang, Y., & Jiang, M. (2022). Retrieval Augmentation for Commonsense Reasoning: A Unified Approach. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 4364–4377). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.294
