Abstract
Recent work in artificial intelligence fairness attempts to mitigate discrimination by proposing constrained optimization programs that achieve parity for some fairness statistic. Most assume availability of the class label, which is impractical in many real-world applications such as precision medicine, actuarial analysis, and recidivism prediction. Here we consider fairness in longitudinal right-censored environments, where the time to event might be unknown, resulting in censorship of the class label and inapplicability of existing fairness studies. We devise applicable fairness measures, propose a debiasing algorithm, and provide the necessary theoretical constructs to bridge fairness with and without censorship for these important and socially sensitive tasks. Our experiments on four censored datasets confirm the utility of our approach.
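To make the censorship problem concrete: with right-censored labels, fairness statistics that need the true class label cannot be computed directly, so group-level comparisons must rely on censoring-aware statistics instead. The sketch below (not the paper's algorithm; cohort data and group names are invented for illustration) computes a group-wise concordance index, a standard survival-analysis statistic that only uses pairs whose ordering is determined despite censorship, and reports the gap between groups as a crude parity probe.

```python
# Minimal sketch, NOT the method from the paper: a group-wise concordance
# index (C-index) under right censorship, used to compare model performance
# across protected groups when class labels may be censored.
# All cohort data below are synthetic illustrations.

def concordance_index(times, events, scores):
    """C-index: among comparable pairs (a subject with an observed event
    vs. any subject surviving strictly longer), the fraction where the
    earlier-event subject received the higher risk score; ties count 0.5."""
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        if not events[i]:              # censored subjects cannot anchor a pair
            continue
        for j in range(n):
            if times[i] < times[j]:    # i's event occurred strictly earlier
                comparable += 1
                if scores[i] > scores[j]:
                    concordant += 1
                elif scores[i] == scores[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else float("nan")

# Synthetic cohort: (time, event observed?, model risk score, group).
cohort = [
    (2, 1, 0.9, "A"), (4, 0, 0.4, "A"), (5, 1, 0.7, "A"), (8, 1, 0.2, "A"),
    (1, 1, 0.8, "B"), (3, 1, 0.3, "B"), (6, 0, 0.6, "B"), (7, 1, 0.1, "B"),
]

c_by_group = {}
for g in ("A", "B"):
    sub = [r for r in cohort if r[3] == g]
    t, e, s = zip(*[(r[0], r[1], r[2]) for r in sub])
    c_by_group[g] = concordance_index(t, e, s)
    print(g, round(c_by_group[g], 3))

# Absolute C-index gap between groups as a crude parity probe.
print("gap", round(abs(c_by_group["A"] - c_by_group["B"]), 3))
```

A debiasing approach in this setting would then constrain or penalize such a gap during training rather than merely report it post hoc.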
Citation
Zhang, W., & Weiss, J. C. (2022). Longitudinal Fairness with Censorship. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 (Vol. 36, pp. 12235–12243). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v36i11.21484