A Bayesian Approach to Causal Discovery

  • Heckerman, D.
  • Meek, C.
  • Cooper, G.

Abstract

We examine the Bayesian approach to the discovery of causal DAG models and compare it to the constraint-based approach. Both approaches rely on the Causal Markov condition, but the two differ significantly in theory and practice. An important difference between the approaches is that the constraint-based approach uses categorical information about conditional-independence constraints in the domain, whereas the Bayesian approach weighs the degree to which such constraints hold. As a result, the Bayesian approach has three distinct advantages over its constraint-based counterpart. One, conclusions derived from the Bayesian approach are not susceptible to incorrect categorical decisions about independence facts that can occur with data sets of finite size. Two, using the Bayesian approach, finer distinctions among model structures—both quantitative and qualitative—can be made. Three, information from several models can be combined to make better inferences and to better account for modeling uncertainty. In addition to describing the general Bayesian approach to causal discovery, we review approximation methods for missing data and hidden variables, and illustrate differences between the Bayesian and constraint-based methods using artificial and real examples.
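To make the contrast described above concrete, the following sketch compares the two styles of reasoning for two binary variables X and Y. It is not code from the paper: the toy counts, the Beta(1, 1) parameter priors, the 0.05 significance threshold, and the use of Python with NumPy/SciPy are all illustrative assumptions. The constraint-based step returns a categorical independence decision; the Bayesian step returns a graded posterior weight over two candidate structures.

import numpy as np
from scipy.special import betaln
from scipy.stats import chi2_contingency

# Illustrative contingency counts for two binary variables X and Y.
# Rows index X (0, 1); columns index Y (0, 1).  These numbers are made up.
counts = np.array([[30, 10],
                   [12, 18]])
n = counts.sum()

# Constraint-based style: a categorical decision about X _||_ Y.
chi2, p_value, _, _ = chi2_contingency(counts)
print(f"chi-square p-value = {p_value:.3f}; "
      f"declare independent: {p_value > 0.05}")

# Bayesian style: weigh how strongly the data favour each structure.
def log_marglik(k, m, a=1.0, b=1.0):
    """Log marginal likelihood of k 'successes' in m trials for a
    Bernoulli parameter with a Beta(a, b) prior (here Beta(1, 1))."""
    return betaln(a + k, b + m - k) - betaln(a, b)

# Structure M0: X and Y independent (separate marginals for X and Y).
log_m0 = (log_marglik(counts[1, :].sum(), n)
          + log_marglik(counts[:, 1].sum(), n))

# Structure M1: X -> Y (marginal for X, plus Y's distribution given each X).
log_m1 = (log_marglik(counts[1, :].sum(), n)
          + log_marglik(counts[0, 1], counts[0, :].sum())
          + log_marglik(counts[1, 1], counts[1, :].sum()))

# With equal prior probability on the two structures, the posterior
# probability of the dependent structure is a graded weight, not a yes/no.
p_dependent = 1.0 / (1.0 + np.exp(log_m0 - log_m1))
print(f"posterior P(X -> Y | data) = {p_dependent:.3f}")

Because the Bayesian output is a probability over structures rather than a hard decision, it can also be averaged across competing models, which is the basis for the model-combination advantage mentioned in the abstract.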

Cite

Heckerman, D., Meek, C., & Cooper, G. (2006). A Bayesian Approach to Causal Discovery. In Innovations in Machine Learning (pp. 1–28). Springer-Verlag. https://doi.org/10.1007/3-540-33486-6_1
