Clustering problems and clustering algorithms are often overly sensitive to the presence of outliers: even a handful of points can greatly affect the structure of the optimal solution and its cost. For this reason, many algorithms for robust clustering have been proposed in recent years. These algorithms discard some points as outliers, excluding them from the clustering. However, outlier selection can be unfair: some categories of input points may be disproportionately affected by the outlier removal algorithm. We study the problem of k-clustering with fair outlier removal and provide the first approximation algorithm for well-known clustering formulations, such as k-means and k-median, proving that it has strong theoretical guarantees. We complement this result with an empirical evaluation showing that, while standard methods for outlier removal have a disproportionate impact across categories of input points, our algorithm equalizes that impact while retaining strong experimental performance on multiple real-world datasets. We also show how the fairness of outlier removal can influence the performance of a downstream learning task. Finally, we provide a coreset construction that makes our algorithm scalable to very large datasets.
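The unfairness the abstract describes can be illustrated with a toy example. The sketch below is not the paper's algorithm: it contrasts a standard heuristic (discard the z points farthest from their nearest center) with a hypothetical "fair" variant that caps removals per group in proportion to group size. All function names and the proportional-quota rule are illustrative assumptions.

```python
import math
from collections import Counter

def nearest_center_dist(point, centers):
    # Distance from a point to its closest cluster center.
    return min(math.dist(point, c) for c in centers)

def standard_outliers(data, centers, z):
    # data: list of (point, group) pairs.
    # Standard heuristic: discard the z points farthest from any center,
    # regardless of which group they belong to.
    ranked = sorted(data, key=lambda pg: nearest_center_dist(pg[0], centers),
                    reverse=True)
    return ranked[:z]

def fair_outliers(data, centers, z):
    # Toy "fair" variant (an illustrative assumption, not the paper's method):
    # each group contributes outliers roughly in proportion to its size.
    n = len(data)
    by_group = {}
    for p, g in data:
        by_group.setdefault(g, []).append(p)
    removed = []
    for g, pts in by_group.items():
        quota = round(z * len(pts) / n)  # quotas may not sum exactly to z
        ranked = sorted(pts, key=lambda p: nearest_center_dist(p, centers),
                        reverse=True)
        removed.extend((p, g) for p in ranked[:quota])
    return removed

# Group A: 8 points near the center; group B: 2 distant points.
centers = [(0.0, 0.0)]
data = ([((0.1 * i, 0.0), "A") for i in range(8)]
        + [((5.0, 0.0), "B"), ((6.0, 0.0), "B")])

std = Counter(g for _, g in standard_outliers(data, centers, z=2))
fair = Counter(g for _, g in fair_outliers(data, centers, z=2))
print("standard:", dict(std))  # all removals fall on group B
print("fair:", dict(fair))     # removals follow group proportions
```

In this example the standard heuristic discards 100% of group B while touching none of group A; the proportional variant spreads the removals, which is the kind of equalized impact the paper's algorithm targets (with formal guarantees that this toy version lacks).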
Citation:
Almanza, M., Epasto, A., Panconesi, A., & Re, G. (2022). K-clustering with fair outliers. In WSDM 2022 - Proceedings of the 15th ACM International Conference on Web Search and Data Mining (pp. 5–15). Association for Computing Machinery, Inc. https://doi.org/10.1145/3488560.3498485