Testing the manifold hypothesis

  • Fefferman C
  • Mitter S
  • Narayanan H

Abstract

The hypothesis that high-dimensional data tend to lie in the vicinity of a low-dimensional manifold is the basis of manifold learning. The goal of this paper is to develop an algorithm (with accompanying complexity guarantees) for fitting a manifold to an unknown probability distribution supported in a separable Hilbert space, using only i.i.d. samples from that distribution. More precisely, our setting is the following. Suppose that data are drawn independently at random from a probability distribution $P$ supported on the unit ball of a separable Hilbert space $H$. Let $G(d, V, \tau)$ be the set of $d$-dimensional submanifolds of the unit ball of $H$ whose volume is at most $V$ and whose reach (the supremum of all $r$ such that any point within distance $r$ of the manifold has a unique nearest point on it) is at least $\tau$. Let $L(M, P)$ denote the mean-squared distance from a random point drawn from $P$ to $M$. We obtain an algorithm that tests the manifold hypothesis in the following sense. The algorithm takes i.i.d. random samples from $P$ as input and determines which of the following two statements is true (at least one must be): (a) there exists $M \in G(d, CV, \frac{\tau}{C})$ such that $L(M, P) \leq C \epsilon$; (b) there exists no $M \in G(d, V/C, C\tau)$ such that $L(M, P) \leq \frac{\epsilon}{C}$. The answer is correct with probability at least $1-\delta$.
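To make the loss $L(M, P)$ concrete, here is a minimal sketch (not the authors' algorithm) of its empirical version, assuming the candidate manifold $M$ is a $d$-dimensional affine subspace fitted by PCA, a toy member of $G(d, V, \tau)$ with infinite reach. The function name and data setup below are hypothetical illustrations only.

```python
import numpy as np

def empirical_msd_to_pca_subspace(samples: np.ndarray, d: int) -> float:
    """Empirical mean-squared distance of the samples to the best-fit
    d-dimensional affine subspace (a toy stand-in for M in L(M, P))."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    # Right singular vectors of the centered data span the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:d]                      # top-d principal directions
    projections = centered @ basis.T @ basis
    residuals = centered - projections  # components orthogonal to the subspace
    return float(np.mean(np.sum(residuals**2, axis=1)))

# Hypothetical usage: points near a 2-dimensional subspace of R^10 plus noise.
rng = np.random.default_rng(0)
latent = rng.standard_normal((500, 2))
embed = rng.standard_normal((2, 10))
data = latent @ embed + 0.05 * rng.standard_normal((500, 10))
print(empirical_msd_to_pca_subspace(data, d=2))  # small, since data lie near the subspace
```

The paper's test is far more general: it quantifies over curved submanifolds with bounded volume $V$ and reach at least $\tau$, which the linear sketch above does not capture.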

Citation (APA)

Fefferman, C., Mitter, S., & Narayanan, H. (2016). Testing the manifold hypothesis. Journal of the American Mathematical Society, 29(4), 983–1049. https://doi.org/10.1090/jams/852
