The relative value of labeled and unlabeled samples in pattern recognition with an unknown mixing parameter

  • Vittorio Castelli
  • Thomas M. Cover


We observe a training set Q composed of l labeled samples {(X_1, θ_1), ..., (X_l, θ_l)} and u unlabeled samples {X'_1, ..., X'_u}. The labels θ_i are independent random variables satisfying Pr{θ_i = 1} = η and Pr{θ_i = 2} = 1 − η. The labeled observations X_i are independently distributed with conditional density f_{θ_i}(·) given θ_i. Let (X_0, θ_0) be a new sample, independently distributed as the samples in the training set. We observe X_0 and wish to infer the classification θ_0. In this paper we first assume that the distributions f_1(·) and f_2(·) are given and that the mixing parameter η is unknown. We show that the relative value of labeled and unlabeled samples in reducing the risk of optimal classifiers is the ratio of the Fisher informations they carry about the parameter η. We then assume that two densities g_1(·) and g_2(·) are given, but we do not know whether g_1(·) = f_1(·) and g_2(·) = f_2(·) or whether the opposite holds, nor do we know η. Thus the learning problem consists of both estimating the optimum partition of the observation space and assigning the classifications to the decision regions. Here we show that labeled samples are necessary to construct a classification rule and that they are exponentially more valuable than unlabeled samples.

Author-supplied keywords

  • Asymptotic theory
  • Bayesian method
  • Labeled and unlabeled samples
  • Laplace's integral
  • Pattern recognition
  • Supervised learning
  • Unsupervised learning
