A Markov blanket based strategy to optimize the induction of Bayesian classifiers when using conditional independence learning algorithms

Abstract

A Bayesian Network (BN) is a graphical representation of a multivariate joint probability distribution that can be induced from data. Inducing a BN from data is an NP-hard problem. Two main approaches can be used to induce a BN from data: Conditional Independence (CI) based algorithms and Heuristic Search (HS) based algorithms. When a BN is induced for classification purposes (a Bayesian Classifier, BC), specific constraints can be imposed to increase computational efficiency. In this paper, a new CI-based algorithm (MarkovPC) for inducing BCs from data is proposed. MarkovPC uses the Markov blanket concept to impose such constraints and optimize the traditional PC algorithm. Experiments performed with the ALARM BN, as well as with UCI and artificial domains, revealed that MarkovPC tends to execute fewer comparisons than the traditional PC. The experiments also show that MarkovPC produces competitive classification rates when compared with both PC and Naïve Bayes. © Springer-Verlag Berlin Heidelberg 2007.
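
The abstract gives no pseudocode, so the following is only a minimal illustrative sketch (in Python) of the general idea it describes: a PC-style skeleton search whose conditional-independence tests are restricted to edges touching the class variable and its current neighbours, i.e. an approximation of the class node's Markov blanket. It is not the authors' MarkovPC implementation; the Fisher-z test on partial correlations (a Gaussian assumption) and all function names (partial_corr, ci_test, markov_pc_skeleton) are choices made here purely for illustration.

    from itertools import combinations

    import numpy as np
    from scipy import stats


    def partial_corr(data, x, y, z):
        # Partial correlation of columns x and y given the columns in z,
        # read off the (pseudo-)inverse of the covariance submatrix.
        idx = [x, y] + list(z)
        prec = np.linalg.pinv(np.cov(data[:, idx], rowvar=False))
        return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])


    def ci_test(data, x, y, z, alpha=0.05):
        # Fisher-z test: returns True when x and y are judged conditionally
        # independent given z at significance level alpha (Gaussian assumption).
        n = data.shape[0]
        r = np.clip(partial_corr(data, x, y, z), -0.9999, 0.9999)
        z_stat = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(max(n - len(z) - 3, 1))
        p_value = 2 * (1 - stats.norm.cdf(abs(z_stat)))
        return p_value > alpha


    def markov_pc_skeleton(data, class_var, alpha=0.05):
        # PC-style skeleton search that only tests edges incident to the class
        # node or to its current neighbours, instead of every pair of variables,
        # which is where the reduction in the number of CI tests comes from.
        n_vars = data.shape[1]
        adj = {v: set(range(n_vars)) - {v} for v in range(n_vars)}
        depth = 0
        while any(len(adj[v]) - 1 >= depth for v in adj):
            relevant = {class_var} | adj[class_var]
            for x in list(relevant):
                for y in list(adj[x]):
                    others = adj[x] - {y}
                    if len(others) < depth:
                        continue
                    for cond in combinations(sorted(others), depth):
                        if ci_test(data, x, y, cond, alpha):
                            adj[x].discard(y)
                            adj[y].discard(x)
                            break
            depth += 1
        return adj

Given a numeric data matrix and the index of the class column, markov_pc_skeleton(data, class_var) returns an undirected skeleton around the class node. The full PC family of algorithms additionally orients edges, and since the paper's experiments involve discrete domains (ALARM, UCI), a discrete CI test such as a chi-square or G-test would replace the Gaussian test used here for brevity.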

Citation (APA)

De Galvão, S. D. C. O., & Hruschka, E. R. (2007). A Markov blanket based strategy to optimize the induction of Bayesian classifiers when using conditional independence learning algorithms. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4654 LNCS, pp. 355–364). https://doi.org/10.1007/978-3-540-74553-2_33
