Abstract
tSNE and UMAP are popular dimensionality reduction algorithms due to their speed and interpretable low-dimensional embeddings. Despite their popularity, little work has been done to study the full span of their differences. We theoretically and experimentally evaluate the space of parameters in both tSNE and UMAP and observe that a single one - the normalization - is responsible for switching between them. This, in turn, implies that a majority of the algorithmic differences can be toggled without affecting the embeddings. We discuss the implications this has for several theoretical claims underpinning UMAP, as well as how to reconcile them with existing interpretations of tSNE. Based on our analysis, we provide a method (GDR) that combines previously incompatible techniques from tSNE and UMAP and can replicate the results of either algorithm. This allows our method to incorporate further improvements, such as an acceleration that obtains either method's outputs faster than UMAP. We release improved versions of tSNE, UMAP, and GDR that are fully plug-and-play with the traditional libraries.
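To make the normalization distinction concrete, the toy sketch below computes Gaussian pairwise affinities and shows the two conventions the abstract contrasts: tSNE normalizes the affinities into a probability distribution over all pairs, while UMAP works with unnormalized per-pair memberships. This is an illustrative simplification, not the paper's implementation; the function name and the fixed bandwidth `sigma` are hypothetical (both algorithms actually calibrate per-point bandwidths).

```python
import numpy as np

def pairwise_similarities(X, sigma=1.0):
    # Gaussian affinity between every pair of points (toy version
    # with a single fixed bandwidth, unlike the real algorithms).
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    P = np.exp(-sq_dists / (2 * sigma ** 2))
    np.fill_diagonal(P, 0.0)  # no self-affinity
    return P

X = np.random.RandomState(0).randn(5, 2)
P = pairwise_similarities(X)

# tSNE-style: normalize so the affinities sum to 1 over all pairs,
# giving a joint probability distribution.
P_tsne = P / P.sum()

# UMAP-style: leave each pairwise affinity unnormalized, so every
# entry is an independent membership strength in [0, 1].
P_umap = P
```

Under this view, toggling the normalization step swaps between the two affinity conventions while leaving the rest of the pipeline untouched, which is the switch the paper identifies.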
Draganov, A., Jørgensen, J., Scheel, K., Mottin, D., Assent, I., Berry, T., & Aslay, C. (2023). ActUp: Analyzing and Consolidating tSNE & UMAP. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2023-August, pp. 3651–3658). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2023/406