CALM-Bench: A Multi-task Benchmark for Evaluating Causality-Aware Language Models


Abstract

Causal reasoning is a critical component of human cognition and is required across a range of question-answering (QA) tasks, such as abductive reasoning, commonsense QA, and procedural reasoning. Research on causal QA has been underdefined, task-specific, and limited in complexity. Recent advances in foundation language models such as BERT, ERNIE, and T5 have demonstrated the efficacy of pre-trained models across diverse QA tasks. However, there is limited research exploring the causal reasoning capabilities of these models, and no standard evaluation benchmark exists. To unify causal QA research, we propose CALM-Bench, a multi-task benchmark for evaluating causality-aware language models (CALM). We present a standardized definition of causal QA tasks and show empirically that causal reasoning can be generalized and transferred across different QA tasks. Additionally, we share a strong multi-task baseline model which outperforms single-task fine-tuned models on the CALM-Bench tasks.
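The multi-task baseline described in the abstract fine-tunes one shared pre-trained encoder jointly on several causal QA tasks. The sketch below illustrates that general setup with the Hugging Face transformers library; it is not the authors' released code. The model choice (bert-base-uncased), the toy two-example "tasks", and the round-robin batch mixing are all illustrative assumptions.

# Hedged sketch of joint multi-task fine-tuning: one shared encoder
# trained round-robin on several causal QA tasks. NOT the authors'
# implementation; model, data, and mixing strategy are assumptions.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "bert-base-uncased"  # placeholder; the paper evaluates several PLMs
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# Toy stand-ins for two causal QA tasks, each cast as a binary
# "is this hypothesis a plausible cause/effect?" classification.
tasks = {
    "abductive_qa": [
        ("The street is wet.", "It rained overnight.", 1),
        ("The street is wet.", "The sun was shining all day.", 0),
    ],
    "commonsense_qa": [
        ("He dropped the glass.", "The glass shattered.", 1),
        ("He dropped the glass.", "The glass floated away.", 0),
    ],
}

def make_loader(examples, batch_size=2):
    premises = [p for p, _, _ in examples]
    hypotheses = [h for _, h, _ in examples]
    enc = tokenizer(premises, hypotheses, padding=True,
                    truncation=True, return_tensors="pt")
    labels = torch.tensor([y for _, _, y in examples])
    ds = TensorDataset(enc["input_ids"], enc["attention_mask"], labels)
    return DataLoader(ds, batch_size=batch_size, shuffle=True)

loaders = {name: make_loader(ex) for name, ex in tasks.items()}
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(2):
    # Round-robin over tasks so the shared encoder sees every task
    # each epoch; the paper's actual mixing strategy may differ.
    for name, loader in loaders.items():
        for input_ids, attention_mask, labels in loader:
            out = model(input_ids=input_ids,
                        attention_mask=attention_mask,
                        labels=labels)
            out.loss.backward()
            optimizer.step()
            optimizer.zero_grad()
        print(f"epoch {epoch} | {name} | loss {out.loss.item():.3f}")

In the paper's actual setting each CALM-Bench task keeps its own format and metric; the only point illustrated here is sharing a single encoder across tasks, which is how the multi-task baseline can outperform single-task fine-tuning.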

Citation (APA)

Dalal, D., Arcan, M., & Buitelaar, P. (2023). CALM-Bench: A Multi-task Benchmark for Evaluating Causality-Aware Language Models. In Findings of the Association for Computational Linguistics: EACL 2023 (pp. 296–311). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-eacl.23
