Combining neural networks based on Dempster-Shafer theory for classifying data with imperfect labels

Citations: 1 · Mendeley readers: 5

Abstract

This paper addresses supervised learning in which the class memberships of the training data are subject to uncertainty. The problem is tackled in the framework of Dempster-Shafer theory. In order to properly estimate the class labels, different types of features are extracted from the data. The initial labels of the training data are ignored and, using the main classes' prototypes, each training pattern in each feature space is reassigned to one class or to a subset of the main classes, according to the level of ambiguity concerning its class label. A multilayer perceptron neural network is used as the base classifier, and for a given test sample its outputs are treated as a basic belief assignment. Finally, the decisions of the base classifiers are combined using Dempster's rule of combination. Experiments with artificial and real data demonstrate that accounting for ambiguity in class labels can yield better results than classifiers trained with imperfect labels. © 2010 Springer-Verlag.
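The combination step mentioned in the abstract uses Dempster's rule: masses from two sources are multiplied over intersecting focal elements and the conflicting (empty-intersection) mass is normalized out. A minimal sketch in Python (not the authors' implementation; the two example mass functions are illustrative, with mass on the compound set {a, b} expressing label ambiguity):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts keyed by frozenset focal
    elements) with Dempster's rule, normalizing out the conflict."""
    combined = {}
    conflict = 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # Normalize so the combined masses again sum to 1
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Hypothetical belief assignments from two base classifiers over
# classes {a, b}; mass on frozenset({"a", "b"}) models ambiguity.
m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"b"}): 0.5, frozenset({"a", "b"}): 0.5}
m = dempster_combine(m1, m2)
```

Here the conflicting product 0.6 × 0.5 is discarded and the remaining mass is renormalized, so the combined masses still sum to one.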

Citation (APA)

Tabassian, M., Ghaderi, R., & Ebrahimpour, R. (2010). Combining neural networks based on Dempster-Shafer theory for classifying data with imperfect labels. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6438 LNAI, pp. 233–244). https://doi.org/10.1007/978-3-642-16773-7_20
