Knowledge-Augmented Language Models for Cause-Effect Relation Classification

Abstract

Previous studies have shown the efficacy of knowledge augmentation methods in pretrained language models. However, these methods behave differently across domains and downstream tasks. In this work, we investigate augmenting pretrained language models with knowledge graph data for cause-effect relation classification and commonsense causal reasoning. After automatically verbalizing triples from ATOMIC 2020, a wide-coverage commonsense reasoning knowledge graph, we continually pretrain BERT and evaluate the resulting model on cause-effect pair classification and commonsense causal reasoning questions. Our results show that a continually pretrained language model augmented with commonsense reasoning knowledge outperforms our baselines on two commonsense causal reasoning benchmarks, COPA and BCOPA-CE, and on the Temporal and Causal Reasoning (TCR) dataset, without any changes to the model architecture or the use of quality-enhanced data for fine-tuning.
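
The method has two steps: verbalize knowledge-graph triples into natural-language sentences, then continue BERT's masked-language-model pretraining on those sentences. The sketch below illustrates only the first step under stated assumptions: the relation templates and example triples are illustrative guesses, not the authors' actual mapping, though xEffect, xNeed, and Causes are genuine ATOMIC 2020 relation types.

    from typing import Iterable, List, Tuple

    # Hypothetical template-per-relation scheme for a few ATOMIC 2020
    # relations; the wording of each template is an assumption.
    TEMPLATES = {
        "xEffect": "{head}. As a result, PersonX {tail}.",
        "xNeed": "Before {head}, PersonX needs {tail}.",
        "Causes": "{head} causes {tail}.",
    }

    def verbalize(head: str, relation: str, tail: str) -> str:
        """Turn one (head, relation, tail) triple into a plain sentence."""
        template = TEMPLATES.get(relation)
        if template is None:
            # Generic fallback for relations without a hand-written template.
            return f"{head} {relation} {tail}."
        return template.format(head=head, tail=tail)

    def verbalize_all(triples: Iterable[Tuple[str, str, str]]) -> List[str]:
        """Verbalize a batch of triples into pretraining sentences."""
        return [verbalize(h, r, t) for h, r, t in triples]

    if __name__ == "__main__":
        examples = [
            ("PersonX pays the bill", "xEffect", "gets a receipt"),
            ("heavy rain", "Causes", "flooding"),
        ]
        for sentence in verbalize_all(examples):
            print(sentence)

In the paper's pipeline, the resulting sentences would then serve as an additional corpus for continued masked-language-model pretraining of BERT before fine-tuning on the downstream benchmarks.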

Citation (APA)

Hosseini, P., Broniatowski, D. A., & Diab, M. (2022). Knowledge-Augmented Language Models for Cause-Effect Relation Classification. In CSRR 2022 - 1st Workshop on Commonsense Representation and Reasoning (pp. 43–48). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.csrr-1.6
