Confidence sets for classification


Abstract

Conformal predictors, introduced in [13], serve to build prediction sets by exploiting a notion of conformity of a new data point with previously observed data. In the classification problem, conformal predictors can address the problem of classification with reject option. In the present paper, we propose a novel method for constructing confidence sets, inspired both by conformal prediction and by classification with reject option. An important property of these confidence sets is that, when there are several observations to label, they control the proportion of the data we choose to label. Moreover, we introduce a notion of risk adapted to classification with reject option. We show that, for this risk, the risk of our confidence set converges to the risk of the confidence set based on the Bayes classifier.
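To make the conformal idea concrete, the following is a minimal sketch of split conformal prediction sets for binary classification. It is an illustration of the general technique only, not the construction proposed in the paper: the score function, the Gaussian toy data, and all names are assumptions. A conformity score is computed on a calibration split, and a new point receives every label whose nonconformity falls below the calibration quantile; an ambiguous point may receive both labels, which plays the role of a reject.

```python
# Illustrative sketch of split conformal prediction sets (assumed setup,
# not the paper's method): class means as a toy classifier, softmax scores.
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes in one dimension.
n = 400
y = rng.integers(0, 2, size=n)
x = rng.normal(loc=2.0 * y, scale=1.0)

# Split the sample: half to fit class means, half for calibration.
x_tr, y_tr, x_cal, y_cal = x[:200], y[:200], x[200:], y[200:]
means = np.array([x_tr[y_tr == k].mean() for k in (0, 1)])

def scores(xs):
    """Softmax 'probabilities' from negative squared distance to class means."""
    d = -(xs[:, None] - means[None, :]) ** 2
    e = np.exp(d - d.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Nonconformity score: 1 minus the estimated probability of the true class.
p_cal = scores(x_cal)
noncf = 1.0 - p_cal[np.arange(len(y_cal)), y_cal]

# Calibration quantile at level 1 - alpha (standard split-conformal rank).
alpha = 0.1
k = int(np.ceil((len(noncf) + 1) * (1 - alpha)))
threshold = np.sort(noncf)[min(k, len(noncf)) - 1]

def prediction_set(x_new):
    """Return every label whose nonconformity is below the threshold."""
    p = scores(np.atleast_1d(x_new))[0]
    return [label for label in (0, 1) if 1.0 - p[label] <= threshold]

# A point at the class-1 mean should get label 1; a point near the
# decision boundary may get both labels (an implicit reject).
print(prediction_set(2.0))
print(prediction_set(1.0))
```

Under exchangeability, the resulting sets contain the true label with probability at least 1 - alpha; the size of the set signals how confidently a point can be labeled.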

Citation (APA)

Denis, C., & Hebiri, M. (2015). Confidence sets for classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9047, pp. 301–312). Springer Verlag. https://doi.org/10.1007/978-3-319-17091-6_25
