Bayesian optimization of the PC algorithm for learning Gaussian Bayesian networks

Abstract

The PC algorithm is a popular method for learning the structure of Gaussian Bayesian networks. It carries out statistical tests to determine absent edges in the network and is hence governed by two parameters: (i) the type of test, and (ii) its significance level. These parameters are usually set to values recommended by an expert. Nevertheless, such an approach can suffer from human bias, leading to suboptimal reconstruction results. In this paper we consider a more principled approach for choosing these parameters automatically. For this we optimize a reconstruction score evaluated on a set of different Gaussian Bayesian networks. This objective is expensive to evaluate and lacks a closed-form expression, which makes Bayesian optimization (BO) a natural choice. BO methods use a model to guide the search and are hence able to exploit smoothness properties of the objective surface. We show that the parameters found by BO outperform those found by a random search strategy and the expert recommendation. Importantly, we have found that an often overlooked statistical test provides the best overall reconstruction results.
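The setup described in the abstract can be illustrated with a minimal sketch (not the authors' code): the two PC parameters, the conditional-independence test and its significance level, are tuned by Bayesian optimization of an average reconstruction score over several networks. The sketch below uses scikit-optimize's gp_minimize as the BO engine; run_pc and reconstruction_score are hypothetical placeholders for a PC implementation and a structure-recovery metric (e.g. structural Hamming distance against a known ground-truth graph), and the test names in the search space are assumed for illustration.

```python
# Minimal sketch, assuming a PC wrapper and a reconstruction metric are available.
import numpy as np
from skopt import gp_minimize
from skopt.space import Categorical, Real


def run_pc(data, test, alpha):
    """Hypothetical wrapper around a PC implementation; returns an estimated graph."""
    raise NotImplementedError


def reconstruction_score(estimated_graph, true_graph):
    """Hypothetical reconstruction metric; lower is better (e.g. SHD)."""
    raise NotImplementedError


def make_objective(datasets, true_graphs):
    # The objective averages the reconstruction error over a set of
    # Gaussian Bayesian networks, as described in the abstract.
    def objective(params):
        test, alpha = params
        scores = [
            reconstruction_score(run_pc(data, test, alpha), graph)
            for data, graph in zip(datasets, true_graphs)
        ]
        return float(np.mean(scores))
    return objective


search_space = [
    # Type of statistical test (names are assumptions, not the paper's list).
    Categorical(["partial_correlation", "fisher_z", "mutual_information"], name="test"),
    # Significance level of the test, searched on a log scale.
    Real(1e-4, 0.2, prior="log-uniform", name="alpha"),
]

# Example usage, given `datasets` and `true_graphs`:
# result = gp_minimize(make_objective(datasets, true_graphs),
#                      search_space, n_calls=50, random_state=0)
# result.x then holds the best (test, alpha) pair found by BO.
```

Random search over the same space (the baseline mentioned in the abstract) can be reproduced by sampling (test, alpha) pairs uniformly and keeping the best-scoring one, which makes the comparison with BO straightforward.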

Citation (APA)

Córdoba, I., Garrido-Merchán, E. C., Hernández-Lobato, D., Bielza, C., & Larrañaga, P. (2018). Bayesian optimization of the PC algorithm for learning Gaussian Bayesian networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11160 LNAI, pp. 44–54). Springer Verlag. https://doi.org/10.1007/978-3-030-00374-6_5
