Classifier conditional posterior probabilities


Abstract

Classifiers based on probability density estimates can be used to find posterior probabilities for the objects to be classified. These probabilities can be used for rejection or for combining classifiers. Posterior probabilities for other classifiers, however, have to be conditioned on the classifier, i.e., they yield class probabilities for a given value of the classifier outcome instead of for a given input feature vector. In this paper they are studied for a set of individual classifiers as well as for combination rules.
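As an illustration (not code from the paper), classifier-conditional posteriors for a classifier with discrete outputs can be estimated by simple frequency counting on a labelled validation set: for each observed classifier output, count how often each true class occurs. All names below are hypothetical.

```python
from collections import Counter, defaultdict

def classifier_conditional_posteriors(outputs, labels):
    """Estimate P(class | classifier output) by counting co-occurrences
    of classifier outputs and true labels on a validation set."""
    counts = defaultdict(Counter)
    for out, lab in zip(outputs, labels):
        counts[out][lab] += 1
    posteriors = {}
    for out, ctr in counts.items():
        total = sum(ctr.values())
        posteriors[out] = {lab: n / total for lab, n in ctr.items()}
    return posteriors

# Toy validation data: classifier decisions vs. true class labels.
outputs = ['A', 'A', 'A', 'B', 'B', 'A']
labels  = ['A', 'A', 'B', 'B', 'B', 'A']
post = classifier_conditional_posteriors(outputs, labels)
# P(class | output='A') is 0.75 for 'A' and 0.25 for 'B' on this sample.
```

Such estimates condition on the classifier's outcome rather than on the input feature vector, which is the distinction the abstract draws; with real data one would typically smooth the counts to avoid zero probabilities.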

Citation (APA)

Duin, R. P. W., & Tax, D. M. J. (1998). Classifier conditional posterior probabilities. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1451, pp. 611–619). Springer Verlag. https://doi.org/10.1007/bfb0033285
