The relative value of labeled and unlabeled samples in pattern recognition with an unknown mixing parameter

Abstract

We observe a training set Q composed of l labeled samples {(X_1, θ_1), …, (X_l, θ_l)} and u unlabeled samples {X'_1, …, X'_u}. The labels θ_i are independent random variables satisfying Pr{θ_i = 1} = η, Pr{θ_i = 2} = 1 − η. The labeled observations X_i are independently distributed with conditional density f_{θ_i}(·) given θ_i. Let (X_0, θ_0) be a new sample, independent of the training set and distributed in the same way. We observe X_0 and wish to infer the classification θ_0. In this paper we first assume that the densities f_1(·) and f_2(·) are known and that the mixing parameter η is unknown. We show that the relative value of labeled and unlabeled samples in reducing the risk of optimal classifiers is the ratio of the Fisher informations they carry about the parameter η. We then assume that two densities g_1(·) and g_2(·) are given, but we do not know whether g_1(·) = f_1(·) and g_2(·) = f_2(·) or whether the opposite holds, nor do we know η. The learning problem then consists of both estimating the optimum partition of the observation space and assigning the classifications to the decision regions. Here we show that labeled samples are necessary to construct a classification rule and that they are exponentially more valuable than unlabeled samples. © 1996 IEEE.
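
As a rough illustration of the first result (not taken from the paper itself), the sketch below compares the per-sample Fisher information about η carried by a labeled sample with that carried by an unlabeled one, for an assumed two-component Gaussian mixture with known component densities. The choices η = 0.3, unit-variance components, and a mean separation of 1.5 are arbitrary assumptions made only for this example.

```python
# Hypothetical example (assumptions: known Gaussian components, eta = 0.3).
# Compares the per-sample Fisher information about the mixing parameter eta
# carried by labeled versus unlabeled samples.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

eta = 0.3                  # assumed mixing parameter Pr{theta = 1}
f1 = norm(0.0, 1.0).pdf    # assumed class-1 density f_1
f2 = norm(1.5, 1.0).pdf    # assumed class-2 density f_2

# Labeled sample (X_i, theta_i): with f_1 and f_2 known, only the
# Bernoulli(eta) label is informative about eta, so the Fisher
# information is 1 / (eta * (1 - eta)).
I_labeled = 1.0 / (eta * (1.0 - eta))

# Unlabeled sample X'_j ~ eta*f1 + (1-eta)*f2: the Fisher information is
# the integral of (f1(x) - f2(x))^2 / (eta*f1(x) + (1-eta)*f2(x)) dx.
def integrand(x):
    mix = eta * f1(x) + (1.0 - eta) * f2(x)
    return (f1(x) - f2(x)) ** 2 / mix

I_unlabeled, _ = quad(integrand, -np.inf, np.inf)

print(f"Fisher information per labeled sample:   {I_labeled:.3f}")
print(f"Fisher information per unlabeled sample: {I_unlabeled:.3f}")
print(f"Relative value (labeled / unlabeled):    {I_labeled / I_unlabeled:.2f}")
```

Per the abstract's first result, the printed ratio is the factor by which a labeled sample is more valuable than an unlabeled one in reducing the risk of the optimal classifier when f_1 and f_2 are known and only η is unknown.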

Citation (APA)

Castelli, V., & Cover, T. M. (1996). The relative value of labeled and unlabeled samples in pattern recognition with an unknown mixing parameter. IEEE Transactions on Information Theory, 42(6), 2102–2117. https://doi.org/10.1109/18.556600