Causality-aware Concept Extraction based on Knowledge-guided Prompting

Abstract

Concepts benefit natural language understanding but remain far from complete in existing knowledge graphs (KGs). Recently, pre-trained language models (PLMs) have been widely used in text-based concept extraction (CE). However, PLMs tend to mine co-occurrence associations from massive corpora as pre-trained knowledge rather than the real causal effects between tokens. As a result, this pre-trained knowledge confounds PLMs into extracting biased concepts from spurious co-occurrence correlations, inevitably resulting in low precision. In this paper, through the lens of a Structural Causal Model (SCM), we propose equipping the PLM-based extractor with a knowledge-guided prompt as an intervention to alleviate concept bias. The prompt derives the topic of the given entity from existing knowledge in KGs to mitigate the spurious co-occurrence correlations between entities and biased concepts. Extensive experiments on representative multilingual KG datasets demonstrate that the proposed prompt effectively alleviates concept bias and improves the performance of PLM-based CE models. The code has been released at https://github.com/siyuyuan/KPCE.
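To make the intervention concrete, below is a minimal sketch of how a knowledge-guided prompt could be constructed: the entity's topic, looked up from an existing KG, is prepended to the input before it reaches the PLM-based extractor. The lookup table, function names, and prompt template here are illustrative assumptions, not the authors' actual implementation; see the linked repository for the official code.

```python
# Minimal sketch of knowledge-guided prompt construction.
# ENTITY_TOPICS stands in for topic knowledge distilled from a KG;
# the real KPCE pipeline obtains topics from existing KG knowledge.

# Hypothetical topic lookup distilled from a KG.
ENTITY_TOPICS = {
    "Python": "programming language",
    "Mercury": "chemical element",
}

def build_prompt(entity: str, description: str) -> str:
    """Prepend the entity's KG topic to the input text.

    In the SCM view, conditioning the extractor on the entity's topic
    acts as an intervention that blocks the spurious co-occurrence
    path between the entity and biased concepts.
    """
    topic = ENTITY_TOPICS.get(entity, "thing")  # generic fallback topic
    return f"Topic: {topic}. Entity: {entity}. Text: {description}"

if __name__ == "__main__":
    prompt = build_prompt("Python", "Python is widely used in machine learning.")
    print(prompt)
    # The prompt-augmented text is then fed to the PLM-based extractor,
    # which reads concept spans for the entity from the description.
```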

Citation (APA)

Yuan, S., Yang, D., Liu, J., Tian, S., Liang, J., Xiao, Y., & Xie, R. (2023). Causality-aware Concept Extraction based on Knowledge-guided Prompting. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 9255–9272). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.514
