Effective dimensions of hierarchical latent class models


Abstract

Hierarchical latent class (HLC) models are tree-structured Bayesian networks where leaf nodes are observed while internal nodes are latent. There are no theoretically well-justified model selection criteria for HLC models in particular, or for Bayesian networks with latent nodes in general. Nonetheless, empirical studies suggest that the BIC score is a reasonable criterion to use in practice for learning HLC models. Empirical studies also suggest that model selection can sometimes be improved if the standard model dimension is replaced with the effective model dimension in the penalty term of the BIC score. Effective dimensions are difficult to compute. In this paper, we prove a theorem that relates the effective dimension of an HLC model to the effective dimensions of a number of latent class models. The theorem makes it computationally feasible to compute the effective dimensions of large HLC models. The theorem can also be used to compute the effective dimensions of general tree models. © 2004 AI Access Foundation. All rights reserved.
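The paper's theorem reduces the HLC computation to the effective dimensions of latent class (LC) models, i.e. models with a single latent variable whose children are the observed variables. A standard way to estimate the effective dimension of one LC model numerically is to take the rank of the Jacobian of the map from free parameters to the joint distribution, evaluated at a generic parameter point. The sketch below illustrates that idea; the function names and the finite-difference setup are illustrative and not taken from the paper.

```python
import itertools
import numpy as np

def unpack(theta, n, r, c):
    """Unpack free parameters into the latent prior p(Z) and the
    conditional tables p(X_i | Z) of a latent class model with one
    latent variable (r states) and n observed variables (c states each)."""
    prior = np.append(theta[:r - 1], 1.0 - theta[:r - 1].sum())
    free = theta[r - 1:].reshape(n, r, c - 1)
    cond = np.concatenate([free, 1.0 - free.sum(axis=2, keepdims=True)], axis=2)
    return prior, cond

def joint(theta, n, r, c):
    """Map free parameters to the joint distribution over the c**n
    configurations of the n observed variables."""
    prior, cond = unpack(theta, n, r, c)
    p = np.empty(c ** n)
    for idx, x in enumerate(itertools.product(range(c), repeat=n)):
        p[idx] = sum(prior[z] * np.prod([cond[i, z, x[i]] for i in range(n)])
                     for z in range(r))
    return p

def effective_dimension(n, r, c, eps=1e-6, seed=0):
    """Numerical rank of the Jacobian of joint() at a random interior
    parameter point; the standard dimension is (r-1) + n*r*(c-1)."""
    rng = np.random.default_rng(seed)
    # Sample a generic valid parameter vector via Dirichlet draws.
    parts = [rng.dirichlet(np.ones(r))[:r - 1]]
    for _ in range(n * r):
        parts.append(rng.dirichlet(np.ones(c))[:c - 1])
    theta = np.concatenate(parts)
    d = theta.size
    base = joint(theta, n, r, c)
    J = np.empty((c ** n, d))
    for j in range(d):  # forward finite differences, one column per parameter
        t = theta.copy()
        t[j] += eps
        J[:, j] = (joint(t, n, r, c) - base) / eps
    # rank tolerance set well above the O(eps) finite-difference noise
    return int(np.linalg.matrix_rank(J, tol=1e-4))
```

This reproduces a classic example of rank deficiency: a binary latent variable with two binary observed children has standard dimension 5, but its effective dimension is 3, since the joint 2x2 table has only 3 free cells; with three binary children both dimensions equal 7. Note the brute-force enumeration of all c**n configurations limits this sketch to small models, which is exactly why the paper's decomposition theorem matters for large HLC models.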

Citation (APA)

Zhang, N. L., & Kočka, T. (2004). Effective dimensions of hierarchical latent class models. Journal of Artificial Intelligence Research. AI Access Foundation. https://doi.org/10.1613/jair.1311
