Learning of latent class models by splitting and merging components

Abstract

A problem in learning latent class models (also known as naive Bayes models with a hidden class variable) is that parameter estimation often converges to a local maximum of the likelihood. The standard remedy of running the EM algorithm from many random starting points is often too expensive computationally. We propose to obtain better starting points for EM by splitting and merging components in models whose parameters have already been estimated. This extends our previous work, in which only component splitting was used and the need for component merging was noted. We discuss theoretical properties of component merging and propose an algorithm that learns latent class models by performing both component splitting and merging. In experiments with real-world data sets, our algorithm performs better than the standard algorithm in a majority of cases. A promising extension would be to apply our method to learning the cardinalities and parameters of hidden variables in Bayesian networks.
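
The abstract does not spell out the split and merge operators, so the following is only a minimal illustrative sketch, not the paper's procedure: a latent class model over binary observed variables fitted by EM, with an assumed perturbation-based split and a weight-averaged merge used to produce new EM starting points. All names here (em, loglik, split_component, merge_components) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def em(X, weights, theta, n_iter=50):
    """EM for a latent class model over binary variables:
    P(x) = sum_k weights[k] * prod_j theta[k, j]**x_j * (1 - theta[k, j])**(1 - x_j)."""
    n, _ = X.shape
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] proportional to weights[k] * P(x_i | z = k)
        log_r = np.log(weights) + X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step with light smoothing to keep parameters away from 0 and 1
        nk = r.sum(axis=0)
        weights = (nk + 1e-3) / (n + len(nk) * 1e-3)
        theta = (r.T @ X + 1e-3) / (nk[:, None] + 2e-3)
    return weights, theta

def loglik(X, weights, theta):
    """Log-likelihood of the data under the current parameters."""
    m = np.log(weights) + X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
    mx = m.max(axis=1, keepdims=True)
    return float((mx + np.log(np.exp(m - mx).sum(axis=1, keepdims=True))).sum())

def split_component(weights, theta, k, eps=0.05):
    """Assumed split: duplicate component k with perturbed conditionals, halving its weight."""
    w = np.append(weights, weights[k] / 2)
    w[k] /= 2
    t_new = np.clip(theta[k] + rng.uniform(-eps, eps, theta.shape[1]), 1e-3, 1 - 1e-3)
    return w, np.vstack([theta, t_new])

def merge_components(weights, theta, k1, k2):
    """Assumed merge: replace components k1 and k2 by their weight-averaged combination."""
    w12 = weights[k1] + weights[k2]
    t12 = (weights[k1] * theta[k1] + weights[k2] * theta[k2]) / w12
    keep = [k for k in range(len(weights)) if k not in (k1, k2)]
    return np.append(weights[keep], w12), np.vstack([theta[keep], t12])

# Usage: fit a 2-component model, split the heaviest component, and continue
# EM from the resulting starting point; keep the split if the score improves.
X = rng.integers(0, 2, size=(500, 6)).astype(float)
w, t = em(X, np.full(2, 0.5), rng.uniform(0.2, 0.8, (2, 6)))
w2, t2 = em(X, *split_component(w, t, k=int(np.argmax(w))))
print(loglik(X, w, t), loglik(X, w2, t2))
```

A full search would interleave such split and merge moves with EM and accept the candidate with the better score (e.g., likelihood or BIC); the details of the paper's scoring and move selection are not given in the abstract.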

Citation

Karčiauskas, G. (2007). Learning of latent class models by splitting and merging components. Studies in Fuzziness and Soft Computing, 213, 235–251. https://doi.org/10.1007/978-3-540-68996-6_11
