Improved Gaussian Mixture Density Estimates Using Bayesian Penalty Terms and Network Averaging


Abstract

We compare two regularization methods which can be used to improve the generalization capabilities of Gaussian mixture density estimates. The first method uses a Bayesian prior on the parameter space. We derive EM (Expectation Maximization) update rules which maximize the a posteriori parameter probability. In the second approach we apply ensemble averaging to density estimation. This includes Breiman's "bagging", which has recently been found to produce impressive results for classification networks.
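As a rough illustration of the second approach, the sketch below applies bagging to density estimation: a small one-dimensional Gaussian mixture is fit by plain EM on several bootstrap resamples of the data, and the resulting density estimates are averaged. This is a minimal sketch of the general idea, not the authors' implementation; the function names, the 1-D restriction, and the variance floor `1e-6` are assumptions made here for brevity.

```python
# Hypothetical sketch: bagging applied to Gaussian mixture density
# estimation. We fit a 1-D GMM by EM on bootstrap resamples of the data
# and average the resulting density estimates.
import numpy as np

def em_gmm_1d(x, k=2, n_iter=50, seed=0):
    """Plain maximum-likelihood EM for a 1-D Gaussian mixture.

    Returns (mixing weights, component means, component variances).
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    w = np.full(k, 1.0 / k)                    # uniform initial weights
    mu = rng.choice(x, size=k, replace=False)  # init means at data points
    var = np.full(k, np.var(x) + 1e-6)         # shared initial variance
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] proportional to w_j N(x_i | mu_j, var_j)
        d = (x[:, None] - mu[None, :]) ** 2
        log_p = -0.5 * (np.log(2 * np.pi * var) + d / var) + np.log(w)
        log_p -= log_p.max(axis=1, keepdims=True)  # stabilize before exp
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted maximum-likelihood updates
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return w, mu, var

def gmm_density(x_eval, params):
    """Evaluate the mixture density at the points in x_eval."""
    w, mu, var = params
    d = (x_eval[:, None] - mu[None, :]) ** 2
    return (w * np.exp(-0.5 * d / var) / np.sqrt(2 * np.pi * var)).sum(axis=1)

def bagged_density(x, x_eval, n_models=10, k=2, seed=0):
    """Average the densities of GMMs fit on bootstrap resamples of x."""
    rng = np.random.default_rng(seed)
    estimates = []
    for m in range(n_models):
        xb = rng.choice(x, size=len(x), replace=True)  # bootstrap resample
        estimates.append(gmm_density(x_eval, em_gmm_1d(xb, k=k, seed=m)))
    return np.mean(estimates, axis=0)
```

Since each bootstrap model is itself a normalized density, their pointwise average is again a normalized density; the averaging reduces the variance contributed by any single EM fit.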

Citation (APA)

Ormoneit, D., & Tresp, V. (1995). Improved Gaussian Mixture Density Estimates Using Bayesian Penalty Terms and Network Averaging. In NIPS 1995: Proceedings of the 8th International Conference on Neural Information Processing Systems (pp. 542–548). MIT Press.
