PaCo: Preconditions Attributed to Commonsense Knowledge

Abstract

Humans can seamlessly reason with the circumstantial preconditions of commonsense knowledge. We understand that a glass is used for drinking water, unless the glass is broken or the water is toxic. Despite the impressive performance of state-of-the-art (SOTA) language models (LMs) on inferring commonsense knowledge, it is unclear whether they understand such circumstantial preconditions. To address this gap, we propose a novel challenge of reasoning with circumstantial preconditions. We collect a dataset, called PaCo, consisting of 12.4 thousand preconditions of commonsense statements expressed in natural language. Based on this dataset, we create three canonical evaluation tasks and use them to examine the capability of existing LMs to understand situational preconditions. Our results reveal a 10-30% gap between machine and human performance on our tasks, which shows that reasoning with preconditions remains an open challenge.

Citation (APA)

Qasemi, E., Ilievski, F., Chen, M., & Szekely, P. (2022). PaCo: Preconditions Attributed to Commonsense Knowledge. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 6810–6825). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.505
