Causal Direction of Data Collection Matters: Implications of Causal and Anticausal Learning for NLP

26 citations · 101 Mendeley readers

Abstract

The principle of independent causal mechanisms (ICM) states that the generative processes of real-world data consist of independent modules that do not influence or inform each other. While this idea has led to fruitful developments in the field of causal inference, it is not widely known in the NLP community. In this work, we argue that the causal direction of the data collection process bears nontrivial implications that can explain a number of published NLP findings, such as differences in semi-supervised learning (SSL) and domain adaptation (DA) performance across different settings. We categorize common NLP tasks according to their causal direction and empirically assay the validity of the ICM principle for text data using minimum description length. We conduct an extensive meta-analysis of over 100 published SSL and 30 DA studies, and find that the results are consistent with our expectations based on causal insights. This work presents the first attempt to analyze the ICM principle in NLP, and provides constructive suggestions for future modeling choices.
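The MDL-based assay mentioned in the abstract can be illustrated with a toy sketch. The paper itself fits pretrained language models to NLP task data; the version below is only a self-contained numeric analogue, and the helper names (gaussian_bits, conditional_bits, marginal_bits) are made up for illustration. It encodes a pair (X, Y) under both factorizations using a fixed, simple model class; under the ICM principle, the causal factorization p(x)p(y|x) should compress better, because the true mechanism lies inside the model class while its inverse does not.

```python
# Toy MDL-style comparison of the causal factorization p(x) p(y|x)
# against the anticausal factorization p(y) p(x|y). This is NOT the
# paper's implementation; it is a synthetic-data sketch of the idea.

import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(-1.0, 1.0, n)
y = x ** 3 + 0.05 * rng.standard_normal(n)  # ground truth: X causes Y

def gaussian_bits(residuals):
    # Differential codelength in bits under a Gaussian with MLE variance.
    # Absolute values are not meaningful on their own; only the
    # difference between the two factorizations matters.
    var = residuals.var() + 1e-12
    return 0.5 * len(residuals) * np.log2(2 * np.pi * np.e * var)

def conditional_bits(inp, out, degree=3):
    # Bits to encode `out` given `inp` under a cubic-regression model.
    coeffs = np.polyfit(inp, out, degree)
    return gaussian_bits(out - np.polyval(coeffs, inp))

def marginal_bits(v):
    # Bits to encode v under a fitted Gaussian marginal.
    return gaussian_bits(v - v.mean())

causal = marginal_bits(x) + conditional_bits(x, y)      # p(x) p(y|x)
anticausal = marginal_bits(y) + conditional_bits(y, x)  # p(y) p(x|y)
print(f"causal: {causal:.0f} bits, anticausal: {anticausal:.0f} bits")
```

On this data the causal direction yields the shorter total codelength: the cubic model class contains the true mechanism x -> x^3, but not its cube-root inverse. This is the kind of direction-dependent asymmetry the paper probes for text data.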

Citation (APA)

Jin, Z., von Kügelgen, J., Ni, J., Vaidhya, T., Kaushal, A., Sachan, M., & Schölkopf, B. (2021). Causal Direction of Data Collection Matters: Implications of Causal and Anticausal Learning for NLP. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 9499–9513). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.748
