A general framework for dimensionality reduction for large data sets


Abstract

With electronic data increasing dramatically in almost all areas of research, a plethora of new techniques for automatic dimensionality reduction and data visualization has become available in recent years. These offer an interface that allows humans to rapidly scan through large volumes of data. As data sets grow larger and larger, however, the standard methods can no longer be applied directly. While random subsampling or prior clustering remain among the most popular solutions in this case, we discuss a principled alternative and formalize the approaches under a general perspective of dimensionality reduction as cost optimization. We take a first look at the question of whether these techniques can be accompanied by theoretical guarantees. © 2011 Springer-Verlag Berlin Heidelberg.
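The abstract's framing of dimensionality reduction as cost optimization can be illustrated with a generic example (this is not the paper's actual framework, just a common instance of the idea): metric MDS-style stress minimization, where a low-dimensional embedding is found by gradient descent on the squared mismatch between high- and low-dimensional pairwise distances. The sketch below is pure Python; all names are illustrative.

```python
import math
import random

def pairwise_dists(X):
    """Euclidean distances between all pairs of points, keyed by (i, j) with i < j."""
    n = len(X)
    return {(i, j): math.dist(X[i], X[j]) for i in range(n) for j in range(i + 1, n)}

def reduce_dim(X, target_dim=2, steps=800, lr=0.02, seed=0):
    """Minimize the stress cost sum_{i<j} (d_ij^high - d_ij^low)^2 by gradient descent.

    This is a generic illustration of the 'dimensionality reduction as cost
    optimization' view, not the specific method of the cited paper.
    """
    rng = random.Random(seed)
    n = len(X)
    D = pairwise_dists(X)                       # fixed high-dimensional distances
    # Random initial embedding in [-1, 1]^target_dim.
    Y = [[rng.uniform(-1, 1) for _ in range(target_dim)] for _ in range(n)]
    for _ in range(steps):
        grad = [[0.0] * target_dim for _ in range(n)]
        for (i, j), d_high in D.items():
            diff = [a - b for a, b in zip(Y[i], Y[j])]
            d_low = math.sqrt(sum(c * c for c in diff)) or 1e-12  # avoid div by zero
            coef = 2.0 * (d_low - d_high) / d_low   # d/dY of (d_high - d_low)^2
            for k in range(target_dim):
                grad[i][k] += coef * diff[k]
                grad[j][k] -= coef * diff[k]
        for i in range(n):
            for k in range(target_dim):
                Y[i][k] -= lr * grad[i][k]
    return Y

def stress(X, Y):
    """The cost being optimized: squared distance mismatch over all pairs."""
    D, E = pairwise_dists(X), pairwise_dists(Y)
    return sum((D[p] - E[p]) ** 2 for p in D)
```

Subsampling-based scaling, mentioned in the abstract, would correspond to running such an optimization on a small subset and placing the remaining points relative to it; the per-step cost here is quadratic in the number of points, which is exactly why standard methods break down on large data sets.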

Citation (APA)

Hammer, B., Biehl, M., Bunte, K., & Mokbel, B. (2011). A general framework for dimensionality reduction for large data sets. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6731 LNCS, pp. 277–287). https://doi.org/10.1007/978-3-642-21566-7_28
