Enhancing Model Robustness and Fairness with Causality: A Regularization Approach

Abstract

Recent work has raised concerns about the risk of spurious correlations and unintended biases in statistical machine learning models that threaten model robustness and fairness. In this paper, we propose a simple and intuitive regularization approach to integrate causal knowledge during model training and build a robust and fair model by emphasizing causal features and de-emphasizing spurious features. Specifically, we first manually identify causal and spurious features using principles inspired by the counterfactual framework of causal inference. Then, we propose a regularization approach that penalizes causal and spurious features separately. By adjusting the strength of the penalty for each type of feature, we build a predictive model that relies more on causal features and less on non-causal features. We conduct experiments to evaluate model robustness and fairness on three datasets with multiple metrics. Empirical results show that the new models built with causal awareness significantly improve model robustness with respect to counterfactual texts and model fairness with respect to sensitive attributes.
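To illustrate the idea of penalizing the two feature groups separately, here is a minimal sketch (not the authors' implementation): a logistic regression whose coefficients on manually identified spurious features receive a much larger L2 penalty than those on causal features. The function name `fit_causal_regularized_lr` and the hyperparameters `lambda_c` and `lambda_s` are hypothetical, chosen only to mirror the description in the abstract.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_causal_regularized_lr(X, y, causal_idx, spurious_idx,
                              lambda_c=0.01, lambda_s=10.0,
                              lr=0.1, n_iter=2000):
    """Logistic regression with separate L2 penalties per feature group.

    Coefficients on causal features are shrunk with strength lambda_c,
    coefficients on spurious features with strength lambda_s (lambda_s >> lambda_c),
    so the fitted model relies more on causal features.
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    # Per-feature penalty vector: small for causal, large for spurious features.
    penalty = np.zeros(d)
    penalty[causal_idx] = lambda_c
    penalty[spurious_idx] = lambda_s
    for _ in range(n_iter):
        p = sigmoid(X @ w + b)
        grad_w = X.T @ (p - y) / n + penalty * w  # log-loss gradient + group-wise L2
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy usage: feature 0 is causal; feature 1 is a spurious correlate of the label.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(float)
X[:, 1] = X[:, 0] + 0.5 * rng.normal(size=500)  # spurious feature tracks the causal one
w, b = fit_causal_regularized_lr(X, y, causal_idx=[0], spurious_idx=[1])
print("learned weights:", w)  # the weight on the spurious feature is strongly shrunk
```

With `lambda_s` set much larger than `lambda_c`, the spurious coefficient is driven toward zero while the causal coefficient carries the prediction, which is the qualitative behavior the regularization approach in the paper is designed to achieve.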

Citation (APA)
Wang, Z., Shu, K., & Culotta, A. (2021). Enhancing Model Robustness and Fairness with Causality: A Regularization Approach. In 1st Workshop on Causal Inference and Natural Language Processing, Proceedings of CI+NLP 2021 (pp. 33–43). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.cinlp-1.3
