Add-Remove-or-Relabel: Practitioner-Friendly Bias Mitigation via Influential Fairness

Abstract

Commensurate with the rise in algorithmic bias research, myriad bias mitigation strategies have been proposed in the literature. Nonetheless, many voice concerns about the lack of transparency that accompanies these methods and the paucity of methods that satisfy practitioners' protocol and data limitations. Influence functions from robust statistics provide a novel opportunity to overcome both issues. Previous work demonstrates the power of influence functions to improve fairness outcomes. This work proposes a novel family of fairness solutions, coined influential fairness (IF), that is human-understandable and agnostic to both the underlying machine learning model and the choice of fairness metric. We investigate practitioner profiles and design mitigation methods for practitioners whose limitations discourage them from using existing bias mitigation methods.
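
To make the idea concrete, the sketch below illustrates the classical influence-function approximation (Koh & Liang, 2017) applied to a group-fairness metric for L2-regularized logistic regression. This is an illustrative reconstruction, not the paper's exact algorithm: the demographic-parity-style gap, the helper names (fit_logreg, dp_gap_grad, influence_on_metric), and all hyperparameters are assumptions made for the example. Ranking training points by the magnitude of their influence scores suggests candidates to add, remove, or relabel.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, l2=1e-2, iters=500, lr=0.1):
    # Plain gradient descent on the L2-regularized logistic loss.
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ w)
        w -= lr * (X.T @ (p - y) / len(y) + l2 * w)
    return w

def dp_gap_grad(X, w, group):
    # Gradient (w.r.t. w) of a demographic-parity-style gap in mean
    # predicted scores between group==1 and group==0. This particular
    # metric is an assumption for illustration; IF is metric-agnostic.
    p = sigmoid(X @ w)
    s = p * (1.0 - p)  # derivative of the sigmoid at each point
    g1 = (X[group == 1] * s[group == 1, None]).mean(axis=0)
    g0 = (X[group == 0] * s[group == 0, None]).mean(axis=0)
    return g1 - g0

def influence_on_metric(X, y, w, grad_metric, l2=1e-2):
    # Influence of up-weighting each training point z_i on the metric m:
    #   I(z_i) ~ -grad_theta m(theta)^T H^{-1} grad_theta loss(z_i, theta)
    # where H is the Hessian of the regularized training loss.
    # A large-magnitude score flags a point whose removal (or
    # relabeling) is predicted to shift the fairness gap the most.
    p = sigmoid(X @ w)
    n, d = X.shape
    H = (X.T * (p * (1.0 - p))) @ X / n + l2 * np.eye(d)
    h_inv_g = np.linalg.solve(H, grad_metric)
    point_grads = X * (p - y)[:, None]  # rows are per-point loss gradients
    return -point_grads @ h_inv_g

In practice one would re-evaluate the metric after editing the flagged points to verify the predicted fairness improvement, and for large models the Hessian-inverse product is typically approximated (e.g., with conjugate gradients) rather than solved directly.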

Citation (APA)

Richardson, B., Sattigeri, P., Wei, D., Ramamurthy, K. N., Varshney, K., Dhurandhar, A., & Gilbert, J. E. (2023). Add-Remove-or-Relabel: Practitioner-Friendly Bias Mitigation via Influential Fairness. In ACM International Conference Proceeding Series (pp. 736–752). Association for Computing Machinery. https://doi.org/10.1145/3593013.3594039
