The k-means problem is to compute a set of k centers (points) that minimizes the sum of squared distances to a given set of n points in a metric space. Arguably the most common algorithm for solving it is k-means++, which is easy to implement and provides a provably small approximation error in time that is linear in n. We generalize k-means++ to support outliers in two senses (simultaneously): (i) non-metric spaces, e.g., M-estimators, where the distance dist(p, x) between a point p and a center x is replaced by min{dist(p, x), c} for an appropriate constant c that may depend on the scale of the input; (ii) k-means clustering with m ≥ 1 outliers, i.e., where the m points farthest from the given k centers are excluded from the total sum of distances. This is achieved via a simple reduction to (k + m)-means clustering (with no outliers).
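The sketch below is a minimal Python/NumPy illustration of the two ingredients described in the abstract: k-means++ seeding with a capped (M-estimator style) squared distance min{dist(p, x)^2, c}, and a reduction that seeds k + m centers and then discards the m points farthest from their nearest center. The function names and implementation details are ours for illustration and are not the authors' code or their exact reduction.

import numpy as np

def capped_sq_dist(points, center, c):
    # Squared Euclidean distance to `center`, truncated at c (M-estimator cap).
    d = np.sum((points - center) ** 2, axis=1)
    return np.minimum(d, c)

def kmeans_pp_seeding(points, k, c=np.inf, seed=None):
    # Standard k-means++ seeding, except every distance is capped at c.
    rng = np.random.default_rng(seed)
    n = len(points)
    centers = [points[rng.integers(n)]]
    d = capped_sq_dist(points, centers[0], c)
    for _ in range(k - 1):
        # Sample the next center proportionally to the (capped) squared distance
        # to the nearest center chosen so far.
        probs = d / d.sum()
        idx = rng.choice(n, p=probs)
        centers.append(points[idx])
        d = np.minimum(d, capped_sq_dist(points, centers[-1], c))
    return np.array(centers), d

def seeding_with_outliers(points, k, m, c=np.inf, seed=None):
    # Illustrative reduction: seed k + m centers without outliers, then treat
    # the m points farthest from their nearest center as the outliers.
    centers, d = kmeans_pp_seeding(points, k + m, c, seed)
    inliers = np.argsort(d)[: len(points) - m]
    return centers, inliers

For example, seeding_with_outliers(X, k=3, m=5, c=1.0) would return 8 candidate centers and the indices of the points kept as inliers; the excluded m points contribute nothing to the (capped) sum of squared distances.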
Statman, A., Rozenberg, L., & Feldman, D. (2020). K-means+++: Outliers-resistant clustering. Algorithms, 13(12), 311. https://doi.org/10.3390/a13120311