Architecture for classifier combination using entropy measures


Abstract

In this paper we emphasize the need for a general theory of combination. Presently, most systems combine recognizers in an ad hoc manner. Recognizers can be combined in series and/or in parallel, and empirical methods can become extremely time-consuming given the very large number of combination possibilities. We have developed a method of systematically arriving at the optimal architecture for combining classifiers, one that can include both parallel and serial schemes; our focus in this paper, however, is on serial methods. We also derive some theoretical results to lay the foundation for our experiments. We show how a greedy algorithm that strives for entropy reduction at every stage leads to results superior to ad hoc combination methods. In our experiments we have seen an advantage of about 5% in certain cases. © Springer-Verlag Berlin Heidelberg 2000.
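The abstract only names the greedy criterion, not its implementation. The following is a minimal, hypothetical sketch of that idea, not the authors' method: it assumes each classifier provides class posteriors on a validation set, uses normalized product fusion as the serial combination rule, and greedily adds the stage that most reduces the average Shannon entropy of the fused posteriors. The function names (greedy_serial_order, combine, avg_entropy) and the product-fusion rule are assumptions introduced here for illustration.

```python
# Hypothetical sketch of greedy, entropy-reducing serial combination.
# Assumptions (not from the paper): classifiers expose class posteriors,
# stages are fused by a normalized product rule, and "entropy reduction"
# is the drop in mean Shannon entropy of the fused posteriors.
import numpy as np

def avg_entropy(probs):
    """Mean Shannon entropy (bits) of a batch of class-posterior rows."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(np.mean(-np.sum(p * np.log2(p), axis=1)))

def combine(p_a, p_b):
    """Serial fusion of two posterior matrices via a normalized product."""
    fused = p_a * p_b
    return fused / fused.sum(axis=1, keepdims=True)

def greedy_serial_order(posteriors):
    """Greedily order classifiers so each added stage maximally reduces
    the average entropy of the fused posteriors on a validation set.

    posteriors: dict name -> (n_samples, n_classes) array of posteriors.
    Returns the chosen cascade order and the final fused posteriors.
    """
    remaining = dict(posteriors)
    # Start from the individually lowest-entropy classifier.
    first = min(remaining, key=lambda k: avg_entropy(remaining[k]))
    order, fused = [first], remaining.pop(first)
    while remaining:
        best, best_h = None, None
        for name, p in remaining.items():
            h = avg_entropy(combine(fused, p))
            if best_h is None or h < best_h:
                best, best_h = name, h
        if best_h >= avg_entropy(fused):  # adding a stage no longer helps
            break
        fused = combine(fused, remaining.pop(best))
        order.append(best)
    return order, fused

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, c = 200, 10
    # Toy validation-set posteriors for three hypothetical recognizers.
    posteriors = {f"clf{i}": rng.dirichlet(np.ones(c) * (i + 1), size=n)
                  for i in range(3)}
    order, fused = greedy_serial_order(posteriors)
    print("chosen cascade order:", order)
    print("final average entropy (bits):", round(avg_entropy(fused), 3))
```

The stopping test mirrors the greedy criterion described in the abstract: a stage is appended only while it further reduces entropy, which also decides how deep the serial cascade grows.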

Citation (APA)

Ianakiev, K., & Govindaraju, V. (2000). Architecture for classifier combination using entropy measures. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1857 LNCS, pp. 340–350). Springer Verlag. https://doi.org/10.1007/3-540-45014-9_33
