In this paper, we propose a general framework for mitigating disparities in the predicted classes with respect to secondary attributes within the data (e.g., race, gender). Our method learns a multi-objective function that, in addition to the primary objective of predicting class labels from the data, employs a clustering-based heuristic to minimize disparities in the class-label distribution across cluster memberships, under the assumption that each cluster should ideally map to a distinct combination of attribute values. Experiments on a benchmark dataset demonstrate effective mitigation of biases without annotations of secondary attribute values (the zero-shot case) or with only a small number of such annotations (the few-shot case).
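The multi-objective idea above can be illustrated with a minimal sketch: a primary cross-entropy term combined with a secondary penalty that measures how far each cluster's predicted-class distribution drifts from the overall distribution. The penalty definition, the `lam` trade-off weight, and the function names here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def disparity_penalty(probs, clusters):
    """Mean squared deviation of each cluster's predicted-class
    distribution from the overall predicted-class distribution.
    (One plausible disparity measure; a hypothetical choice.)"""
    overall = probs.mean(axis=0)
    cluster_ids = np.unique(clusters)
    penalty = 0.0
    for c in cluster_ids:
        cluster_dist = probs[clusters == c].mean(axis=0)
        penalty += np.sum((cluster_dist - overall) ** 2)
    return penalty / len(cluster_ids)

def multi_objective_loss(probs, labels, clusters, lam=0.5):
    """Primary cross-entropy loss plus a lam-weighted disparity term."""
    n = len(labels)
    ce = -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))
    return ce + lam * disparity_penalty(probs, clusters)
```

When the predicted-class distribution is identical across clusters, the penalty vanishes and only the primary objective remains; skewed clusters add a positive term that a gradient-based learner would trade off against classification accuracy.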
Mondal, I., Sen, P., & Ganguly, D. (2021). Multi-objective Few-shot Learning for Fair Classification. In International Conference on Information and Knowledge Management, Proceedings (pp. 3338–3342). Association for Computing Machinery. https://doi.org/10.1145/3459637.3482146