Abstract
Causally insufficient structures (models with latent or hidden variables, or with confounding, etc.) of joint probability distributions have been the subject of intense study not only in statistics but also in various AI systems. In AI, belief networks, which represent a joint probability distribution over an underlying directed acyclic graph structure, have received special attention because efficient reasoning (uncertainty propagation) methods have been developed for belief network structures. Algorithms have therefore been developed to acquire the belief network structure from data. Because artifacts due to variable hiding negatively influence the performance of derived belief networks, models with latent variables have been studied, and several algorithms for learning belief network structure under causal insufficiency have also been developed. Regrettably, some of them are already known to be erroneous (e.g. the IC algorithm of [12]). This paper is devoted to another algorithm, the Fast Causal Inference (FCI) algorithm of [17]. It is proven by a specially constructed example that this algorithm, as it stands in [17], is also erroneous. The fundamental reason for the failure of this algorithm is the temporary introduction of non-real links between nodes of the network with the intention of removing them later. While for trivial dependency structures these non-real links may actually be removed, this may not be the case for complex ones, e.g. the case described in this paper. A remedy for this failure is proposed.
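The "non-real links" mentioned above arise because FCI, like the PC algorithm, begins its skeleton phase from a complete graph (every pair of variables linked) and then prunes an edge whenever some conditioning set renders its endpoints independent. A minimal sketch of that pruning loop, assuming a hypothetical conditional-independence oracle `independent(x, y, s)` supplied by the caller (this is an illustration of the general skeleton phase, not the paper's counterexample):

```python
from itertools import combinations

def learn_skeleton(variables, independent):
    """PC/FCI-style skeleton phase: start from a complete graph,
    i.e. with links that may not exist in the true model, then
    remove the edge X-Y whenever X and Y are conditionally
    independent given some subset S of X's current neighbours.
    `independent(x, y, s)` is a hypothetical CI oracle."""
    adj = {v: set(variables) - {v} for v in variables}
    n = 0  # size of the conditioning sets tried in this pass
    while any(len(adj[x] - {y}) >= n for x in variables for y in adj[x]):
        for x in variables:
            for y in list(adj[x]):  # copy: adj[x] may shrink
                for s in combinations(sorted(adj[x] - {y}), n):
                    if independent(x, y, set(s)):
                        adj[x].discard(y)
                        adj[y].discard(x)
                        break
        n += 1
    return adj

# Toy chain A -> B -> C: A and C are independent given {B}.
oracle = lambda x, y, s: {x, y} == {"A", "C"} and "B" in s
skeleton = learn_skeleton(["A", "B", "C"], oracle)
```

On this trivial structure the spurious A-C link is indeed removed, leaving only A-B and B-C; the paper's point is that on sufficiently complex dependency structures such temporarily introduced links can survive the removal phase.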
Kłopotek, M. A. (2000). On a deficiency of the FCI algorithm learning Bayesian networks from data. Demonstratio Mathematica, 33(1), 181–194. https://doi.org/10.1515/dema-2000-0122