Committee machines


Abstract

This chapter describes some of the most important architectures and algorithms for committee machines. We discuss three reasons for using committee machines. The first is that a committee can achieve test-set performance unobtainable by any single committee member; as typical representatives of this approach, we describe simple averaging, bagging, and boosting. Second, committee machines yield modular solutions, which is advantageous in many applications. The prime example given here is the mixture-of-experts (ME) approach, whose goal is to autonomously break a complex prediction task into subtasks that are modeled by the individual committee members. The third reason for using committee machines is a reduction in computational complexity. In the Bayesian committee machine (BCM) presented here, the training data set is partitioned into several smaller data sets, and the committee members are trained on the different partitions; their predictions are then combined using a covariance-based weighting scheme. The computational complexity of the BCM grows only linearly with the size of the training data set, independent of the learning systems used as committee members.
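
The covariance-based combination the abstract describes can be illustrated with a short sketch. The code below is a minimal illustration, not the chapter's implementation: it assumes Gaussian process regression members with an RBF kernel, and the kernel choice, noise level, partitioning, and all function names are illustrative assumptions.

```python
# Minimal sketch of the Bayesian committee machine (BCM) for GP regression.
# Each member is trained on its own data partition; predictions at the query
# points are merged with covariance-based weights.
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_member_predict(X_i, y_i, X_q, K_qq, noise=0.1):
    """Posterior mean and covariance at query points X_q for one committee
    member trained only on its partition (X_i, y_i)."""
    K_ii = rbf_kernel(X_i, X_i) + noise ** 2 * np.eye(len(X_i))
    K_qi = rbf_kernel(X_q, X_i)
    mean = K_qi @ np.linalg.solve(K_ii, y_i)
    cov = K_qq - K_qi @ np.linalg.solve(K_ii, K_qi.T)
    return mean, cov

def bcm_predict(partitions, X_q):
    """Covariance-weighted BCM combination:
    C^{-1} = -(M-1) K_qq^{-1} + sum_i C_i^{-1},
    mean   = C @ sum_i C_i^{-1} m_i."""
    M = len(partitions)
    jitter = 1e-8 * np.eye(len(X_q))          # numerical stabilizer
    K_qq = rbf_kernel(X_q, X_q) + jitter       # prior covariance at queries
    prec = -(M - 1) * np.linalg.inv(K_qq)      # prior-correction term
    weighted_mean = np.zeros(len(X_q))
    for X_i, y_i in partitions:
        m_i, C_i = gp_member_predict(X_i, y_i, X_q, K_qq)
        C_i_inv = np.linalg.inv(C_i + jitter)  # member precision = weight
        prec += C_i_inv
        weighted_mean += C_i_inv @ m_i
    C = np.linalg.inv(prec)                    # combined posterior covariance
    return C @ weighted_mean, C

# Toy usage: 600 noisy samples of sin(x), split across 3 committee members,
# so each member inverts a 200x200 kernel matrix instead of 600x600.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(600, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(600)
partitions = [(X[i::3], y[i::3]) for i in range(3)]
X_q = np.linspace(-3, 3, 5)[:, None]
mean, cov = bcm_predict(partitions, X_q)
print(np.round(mean, 2))  # should roughly track sin at the query points
```

Because each member inverts only the kernel matrix of its own partition, the cost at fixed partition size grows linearly with the number of partitions, and hence with the training set size, which is the complexity benefit the abstract describes.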

Citation (APA)

Tresp, V. (2001). Committee machines. In Handbook of Neural Network Signal Processing (pp. 5-1–5-18). CRC Press. https://doi.org/10.1201/9781315220413-5
