CRAB: Assessing the Strength of Causal Relationships Between Real-World Events


Abstract

Understanding narratives requires reasoning about the cause-and-effect relationships between events mentioned in the text. While existing foundation models yield impressive results in many NLP tasks requiring reasoning, it is unclear whether they understand the complexity of the underlying network of causal relationships of events in narratives. In this work, we present CRAB, a new Causal Reasoning Assessment Benchmark designed to evaluate causal understanding of events in real-world narratives. CRAB contains fine-grained, contextual causality annotations for ∼ 2.7K pairs of real-world events that describe various newsworthy event timelines (e.g., the acquisition of Twitter by Elon Musk). Using CRAB, we measure the performance of several large language models, demonstrating that most systems achieve poor performance on the task. Motivated by classical causal principles, we also analyze the causal structures of groups of events in CRAB, and find that models perform worse on causal reasoning when events are derived from complex causal structures compared to simple linear causal chains. We make our dataset and code available to the research community.

Citation (APA)

Romanou, A., Montariol, S., Paul, D., Laugier, L., Aberer, K., & Bosselut, A. (2023). CRAB: Assessing the Strength of Causal Relationships Between Real-World Events. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 15198–15216). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.940
