Use of explanation trees to describe the state space of a probabilistic-based abduction problem


Abstract

This chapter presents a new approach to the problem of obtaining the most probable explanations for a set of observations in a Bayesian network. The method provides a set of possibilities ranked by their probabilities. Its main novelties are that the level of detail of each explanation is not uniform (the aim being to keep each one as simple as possible), the explanations are mutually exclusive, and the number of explanations is not fixed in advance (it depends on the particular case being solved). These goals are achieved by constructing a so-called explanation tree, which can have asymmetric branching and which determines the different possibilities. The chapter describes a procedure for computing this tree based on information-theoretic criteria and shows its behaviour on some examples. To test the procedure we have used a couple of examples that can be intuitively interpreted and understood. Moreover, we have carried out a set of experiments to compare the method with other existing abductive techniques designed with goals similar to ours. © 2008 Springer-Verlag Berlin Heidelberg.
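The three properties claimed in the abstract (non-uniform level of detail, mutual exclusivity, and a case-dependent number of explanations) can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the chapter's actual algorithm: the toy joint distribution, the thresholds `alpha` and `beta`, and the use of conditional entropy as the information-theoretic selection criterion are all hypothetical stand-ins chosen to show how an asymmetric explanation tree might be grown greedily.

```python
from math import log2

VARS = ("A", "B", "C")
# Hypothetical joint distribution P(A, B, C); keys are value tuples.
JOINT = {
    (0, 0, 0): 0.30, (0, 0, 1): 0.05, (0, 1, 0): 0.10, (0, 1, 1): 0.05,
    (1, 0, 0): 0.02, (1, 0, 1): 0.03, (1, 1, 0): 0.15, (1, 1, 1): 0.30,
}

def prob(evidence):
    """P(evidence), where evidence maps variable name -> value."""
    return sum(p for vals, p in JOINT.items()
               if all(vals[VARS.index(v)] == x for v, x in evidence.items()))

def entropy(var, evidence):
    """Conditional entropy H(var | evidence)."""
    pe = prob(evidence)
    h = 0.0
    for x in (0, 1):
        px = prob({**evidence, var: x}) / pe
        if px > 0:
            h -= px * log2(px)
    return h

def explanation_tree(evidence=None, alpha=0.15, beta=0.3):
    """Return mutually exclusive explanations as (assignment, probability) pairs.

    Greedy sketch: expand the most informative (lowest-entropy) unassigned
    variable; branch only on values whose conditional probability exceeds
    alpha (giving asymmetric branching); stop refining once every remaining
    variable is nearly determined (conditional entropy below beta).
    """
    evidence = evidence or {}
    free = [v for v in VARS if v not in evidence]
    if not free or all(entropy(v, evidence) < beta for v in free):
        return [(dict(evidence), prob(evidence))]
    best = min(free, key=lambda v: entropy(v, evidence))
    leaves = []
    for x in (0, 1):
        p = prob({**evidence, best: x}) / prob(evidence)
        if p > alpha:
            leaves += explanation_tree({**evidence, best: x}, alpha, beta)
        else:
            # Low-probability branch: keep it as a coarse (shallower) leaf so
            # the explanations stay exhaustive and mutually exclusive.
            leaves.append(({**evidence, best: x}, prob({**evidence, best: x})))
    return leaves

for assignment, p in sorted(explanation_tree(), key=lambda t: -t[1]):
    print(assignment, round(p, 3))
```

With this toy distribution the leaves differ in depth (some explanations fix all three variables, an unlikely branch is left coarse), their probabilities sum to one, and the number of explanations falls out of the thresholds rather than being fixed in advance — mirroring, under the stated assumptions, the behaviour the abstract describes.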

Citation (APA)

Flores, M. J., Gámez, J. A., & Moral, S. (2008). Use of explanation trees to describe the state space of a probabilistic-based abduction problem. Studies in Computational Intelligence, 156, 251–280. https://doi.org/10.1007/978-3-540-85066-3_10
