Global least-squares vs. EM training for the Gaussian mixture of experts

Abstract

Since the introduction of the mixture of experts model and the EM algorithm for training it, maximum likelihood training of such networks has proved a very useful and powerful tool for function estimation and prediction. Other researchers have derived a similar architecture from the application of fuzzy rules; such systems are often trained by a straightforward global error minimisation procedure. This paper argues that in certain situations global optimisation is the most appropriate approach to take, despite its apparent lack of statistical justification compared with the maximum likelihood approach. Moreover, a combination of the two approaches often gives the lowest error on both the training and validation sets.
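To make the contrast concrete, below is a minimal Python sketch (not the authors' code) of the two training objectives, assuming a one-dimensional Gaussian mixture of experts with linear experts, a softmax gating network, and a fixed noise level sigma; all function and variable names are illustrative.

    # Minimal sketch of the two objectives for a 1-D Gaussian mixture of
    # experts with linear experts and a softmax gate (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)

    def gate(x, V):
        """Softmax gating network: mixing weights g_k(x) for each expert."""
        logits = np.outer(x, V[:, 0]) + V[:, 1]       # shape (N, K)
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        e = np.exp(logits)
        return e / e.sum(axis=1, keepdims=True)

    def experts(x, W):
        """Linear expert means f_k(x) = w_k * x + b_k."""
        return np.outer(x, W[:, 0]) + W[:, 1]         # shape (N, K)

    def neg_log_likelihood(x, y, W, V, sigma=0.1):
        """Maximum-likelihood (EM) objective:
        -sum_n log sum_k g_k(x_n) * N(y_n | f_k(x_n), sigma^2)."""
        g = gate(x, V)
        f = experts(x, W)
        dens = np.exp(-0.5 * ((y[:, None] - f) / sigma) ** 2) \
               / (np.sqrt(2 * np.pi) * sigma)
        return -np.log((g * dens).sum(axis=1)).sum()

    def global_squared_error(x, y, W, V):
        """Global least-squares objective on the blended prediction:
        sum_n (y_n - sum_k g_k(x_n) * f_k(x_n))^2."""
        blended = (gate(x, V) * experts(x, W)).sum(axis=1)
        return ((y - blended) ** 2).sum()

    # Toy data: a piecewise-linear target, two experts (K = 2).
    x = rng.uniform(-1, 1, 200)
    y = np.where(x < 0, -x, 2 * x) + 0.05 * rng.standard_normal(200)
    W = rng.standard_normal((2, 2))   # expert slopes and intercepts
    V = rng.standard_normal((2, 2))   # gate slopes and intercepts
    print(neg_log_likelihood(x, y, W, V), global_squared_error(x, y, W, V))

The EM algorithm ascends the first objective, while direct gradient descent on the second minimises the squared error of the blended prediction itself, which is the "global" criterion discussed in the abstract.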

Citation (APA)

Bradshaw, N. P., Duchâteau, A., & Bersini, H. (1997). Global least-squares vs. EM training for the Gaussian mixture of experts. In Lecture Notes in Computer Science (Vol. 1327, pp. 295–300). Springer-Verlag. https://doi.org/10.1007/bfb0020170
