We show that for any data set in any metric space, it is possible to construct a hierarchical clustering with the guarantee that for every k, the induced k-clustering has cost at most eight times that of the optimal k-clustering. Here the cost of a clustering is taken to be the maximum radius of its clusters. Our algorithm is similar in simplicity and efficiency to common heuristics for hierarchical clustering, and we show that these heuristics have poorer approximation factors.
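The abstract states the guarantee but not the construction; Dasgupta's algorithm is built on the farthest-first traversal of Gonzalez, which orders the points so that each new center is as far as possible from all earlier ones. A minimal sketch of that traversal and the k-clusterings it induces (function names are our own, and this omits the paper's additional levelling step that yields the 8-approximate hierarchy):

```python
import math

def farthest_first(points):
    """Farthest-first traversal: start from points[0], then repeatedly
    pick the point farthest from all centers chosen so far.
    Returns the traversal order as a list of indices."""
    n = len(points)
    order = [0]
    # nearest[i] = distance from points[i] to its closest chosen center
    nearest = [math.dist(points[i], points[0]) for i in range(n)]
    for _ in range(n - 1):
        nxt = max(range(n), key=lambda i: nearest[i])
        order.append(nxt)
        for i in range(n):
            nearest[i] = min(nearest[i], math.dist(points[i], points[nxt]))
    return order

def k_clustering(points, order, k):
    """Induced k-clustering: assign every point to the closest of the
    first k points in the traversal order."""
    centers = order[:k]
    return [min(centers, key=lambda c: math.dist(p, points[c]))
            for p in points]
```

Truncating the same traversal at every k is what makes a single ordering serve all granularities at once; the paper's contribution is arranging these prefixes into a genuinely nested hierarchy while keeping every level's maximum cluster radius within a factor of eight of optimal.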
Dasgupta, S. (2002). Performance guarantees for hierarchical clustering. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 2375, pp. 351–363). Springer Verlag. https://doi.org/10.1007/3-540-45435-7_24