While harms of allocation have been increasingly studied within the subfield of algorithmic fairness, harms of representation have received considerably less attention. In this paper, we formalize two notions of stereotyping and show how they manifest later in the machine learning pipeline as allocative harms. We also propose mitigation strategies and demonstrate their effectiveness on synthetic datasets.
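As a toy illustration of the general idea (this is not the paper's formalism; the functions, data, and threshold below are invented for exposition), one way a representational harm can become an allocative one is when a learned representation collapses each group to a single prototype, erasing within-group variation; a downstream decision rule then allocates by group membership alone:

```python
# Hypothetical sketch: a "stereotyped" representation replaces each
# individual's score with their group's mean, and a threshold-based
# allocation rule then treats every group member identically.

def stereotype(scores, groups):
    """Map each individual's score to their group's mean score."""
    means = {}
    for g in set(groups):
        members = [s for s, gg in zip(scores, groups) if gg == g]
        means[g] = sum(members) / len(members)
    return [means[g] for g in groups]

def allocate(scores, threshold=0.5):
    """Simple allocative decision: award the resource if the score clears the threshold."""
    return [s >= threshold for s in scores]

# Synthetic data: group B has a lower mean but contains a qualified member.
scores = [0.9, 0.8, 0.2, 0.7, 0.3, 0.1]
groups = ["A", "A", "A", "B", "B", "B"]

before = allocate(scores)
after = allocate(stereotype(scores, groups))
# Group B's mean (~0.37) falls below the threshold, so after stereotyping
# every B member is denied, including the individual who scored 0.7.
```

Under this toy representation, allocation decisions depend only on group membership, which is the kind of downstream harm the paper's formal notions are designed to capture and mitigate.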
Abbasi, M., Friedler, S. A., Scheidegger, C., & Venkatasubramanian, S. (2019). Fairness in representation: Quantifying stereotyping as a representational harm. In SIAM International Conference on Data Mining, SDM 2019 (pp. 801–809). Society for Industrial and Applied Mathematics Publications. https://doi.org/10.1137/1.9781611975673.90