Unsupervised learning of binary vectors: a Gaussian scenario

  • Mauro Copelli
  • Christian Van Den Broeck

7 Mendeley users have this article in their library; 1 citation of this article.


We study a model of unsupervised learning in which the real-valued data vectors are isotropically distributed, except along a single symmetry-breaking binary direction $\bm{B}\in\{-1,+1\}^{N}$, onto which the projections have a Gaussian distribution. We show that a candidate vector $\bm{J}$ undergoing Gibbs learning in this discrete space approaches the perfect match $\bm{J}=\bm{B}$ exponentially fast. Besides the second-order ``retarded learning'' phase transition for unbiased distributions, we show that first-order transitions can also occur. Extending the known result that the center of mass of the Gibbs ensemble has Bayes-optimal performance, we show that taking the sign of the components of this vector leads to the vector with optimal performance in the binary space. These upper bounds are shown not to be saturated by the technique of transforming the components of a special continuous vector, except in asymptotic limits and in a special linear case. Simulations are presented which are in excellent agreement with the theoretical results.
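The construction in the abstract — take the component-wise center of mass of an ensemble of binary candidate vectors, then apply the sign function to land back in $\{-1,+1\}^{N}$ — can be illustrated with a minimal sketch. This is not the paper's Gibbs sampler: the ensemble below is a hypothetical stand-in in which each candidate agrees with the hidden direction `B` on most components, just to show the clipping step itself.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
B = rng.choice([-1, 1], size=N)  # hidden symmetry-breaking binary direction

# Stand-in for a Gibbs ensemble: copies of B with each component
# flipped independently with probability 0.3 (assumption for illustration;
# the paper draws these from the actual Gibbs posterior).
ensemble = []
for _ in range(200):
    J = B.copy()
    flips = rng.random(N) < 0.3
    J[flips] *= -1
    ensemble.append(J)

# Center of mass of the ensemble: real-valued, Bayes-optimal direction.
center = np.mean(ensemble, axis=0)

# Clipping: the sign of each component gives the optimal *binary* vector.
J_clip = np.sign(center).astype(int)

overlap_clip = J_clip @ B / N  # overlap with B; close to 1 here
```

Because each component of `center` has mean $0.4\,B_i$, its sign recovers $B_i$ with high probability once the ensemble is large, so the clipped vector's overlap with `B` approaches 1.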
