Learning conditional linear Gaussian classifiers with probabilistic class labels

Abstract

We study the problem of learning Bayesian classifiers (BCs) when the true class label of the training instances is not known and is instead replaced by a probability distribution over the class labels for each instance. This scenario arises, e.g., when a group of experts is asked to individually provide a class label for each instance. We particularize the generalized expectation maximization (GEM) algorithm in [1] to learn BCs of different structural complexities: naive Bayes, averaged one-dependence estimators, and general conditional linear Gaussian classifiers. An evaluation on eight datasets shows that BCs learned with GEM outperform those learned with either the classical expectation maximization (EM) algorithm or the potentially wrong class labels. The BCs achieve results similar to the multivariate Gaussian classifier without having to estimate full covariance matrices.
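
To make the setting concrete, below is a minimal sketch (not the authors' implementation) of EM-style learning of a Gaussian naive Bayes with probabilistic class labels. It treats the expert-given label distributions P as soft evidence: the M-step performs weighted maximum-likelihood estimation of the class priors and per-class means and variances, and the E-step re-weights the model's posterior by the given label distributions. The function name, the combination rule for P and the model posterior, and the initialization are illustrative assumptions; the actual GEM update in [1] may differ.

import numpy as np

def fit_soft_label_gnb(X, P, n_iter=20, eps=1e-9):
    # X: (n, d) feature matrix; P: (n, k) soft labels (each row sums to 1).
    # Hypothetical sketch of EM with probabilistic labels, not the paper's exact GEM.
    n, d = X.shape
    k = P.shape[1]
    R = P.copy()  # initialize responsibilities with the given label distributions
    for _ in range(n_iter):
        # M-step: weighted MLE of priors, class-conditional means and variances
        Nk = R.sum(axis=0) + eps                 # effective per-class counts
        priors = Nk / n
        means = (R.T @ X) / Nk[:, None]
        var = np.empty((k, d))
        for c in range(k):
            diff = X - means[c]
            var[c] = (R[:, c:c+1] * diff**2).sum(axis=0) / Nk[c] + eps
        # E-step: Gaussian naive Bayes log-likelihood per class
        log_lik = np.zeros((n, k))
        for c in range(k):
            log_lik[:, c] = (np.log(priors[c])
                             - 0.5 * np.sum(np.log(2 * np.pi * var[c]))
                             - 0.5 * np.sum((X - means[c])**2 / var[c], axis=1))
        # Re-weight the model posterior by the soft labels and renormalize
        R = P * np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
        R /= R.sum(axis=1, keepdims=True) + eps
    return priors, means, var

Restricting each feature to a single Gaussian per class (a diagonal rather than full covariance) is what lets this family of classifiers avoid estimating full covariance matrices, at the cost of the conditional independence assumption that the AODE and general conditional linear Gaussian variants progressively relax.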

Citation (APA)

López-Cruz, P. L., Bielza, C., & Larrañaga, P. (2013). Learning conditional linear Gaussian classifiers with probabilistic class labels. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8109 LNAI, pp. 139–148). https://doi.org/10.1007/978-3-642-40643-0_15
