Fast approximate inference in hybrid Bayesian networks using dynamic discretisation

Abstract

We consider inference in a Bayesian network that can consist of a mix of discrete and continuous variables. It is well known that this task cannot be solved in general using standard inference algorithms based on the junction tree. A common solution to this problem is to discretise the continuous variables to obtain a fully discrete model, in which standard inference can be performed. The most efficient discretisation procedure in terms of cost of inference is known as dynamic discretisation, and was published by Kozlov and Koller in the late 1990s. In this paper we discuss an already published simplification of that algorithm by Neil et al. The simplification is in practice orders of magnitude faster than Kozlov and Koller's technique, but potentially at the cost of some loss of precision. We consider the mathematical properties of Neil et al.'s algorithm and challenge it by constructing models that are particularly difficult for that method. Some simple modifications to the core algorithm are proposed, and the empirical results are very promising, indicating that the simplified procedure is feasible even for very challenging problems. © 2013 Springer-Verlag.
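
As a rough illustration of the discretisation idea mentioned in the abstract, the Python sketch below iteratively refines a binning of a single continuous density by repeatedly splitting the bin with the largest estimated error. This is not the authors' algorithm: the function name refine_bins and the mass-times-width error score are assumptions introduced here as a crude stand-in for the entropy-based error measures used in dynamic discretisation.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch (not the paper's method): refine a discretisation of a
# single continuous density by splitting the bin with the highest score.
# The score (mass * width) is only a crude proxy for the entropy-based
# discretisation error used in dynamic discretisation.

def refine_bins(pdf, lo, hi, n_iter=20):
    edges = np.array([lo, hi], dtype=float)
    for _ in range(n_iter):
        mids = 0.5 * (edges[:-1] + edges[1:])   # bin midpoints
        widths = np.diff(edges)                  # bin widths
        mass = pdf(mids) * widths                # midpoint-rule mass estimate
        worst = np.argmax(mass * widths)         # bin with largest proxy error
        edges = np.insert(edges, worst + 1, mids[worst])  # split it in two
    return edges

# Example: discretise a standard normal density on [-4, 4]
edges = refine_bins(norm(0.0, 1.0).pdf, -4.0, 4.0, n_iter=30)
probs = np.diff(norm.cdf(edges))   # per-bin probabilities (sum to roughly 1)
print(len(edges) - 1, "bins; total mass", probs.sum())
```

In a hybrid Bayesian network the resulting bins would replace the continuous variable's state space, so that standard discrete junction-tree inference can be run on the discretised model; the refinement loop above only hints at how a dynamic scheme concentrates bins where the density needs them most.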

Cite (APA)

Langseth, H., Marquez, D., & Neil, M. (2013). Fast approximate inference in hybrid Bayesian networks using dynamic discretisation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7930 LNCS, pp. 225–234). https://doi.org/10.1007/978-3-642-38637-4_23
