Scalable, efficient and correct learning of Markov boundaries under the faithfulness assumption


Abstract

We propose an algorithm for learning the Markov boundary of a random variable from data without having to learn a complete Bayesian network. The algorithm is correct under the faithfulness assumption, scalable, and data-efficient. The last two properties are important because we aim to apply the algorithm to identify the minimal set of random variables that is relevant for probabilistic classification in databases with many random variables but few instances. We report experiments with synthetic and real databases with 37, 441, and 139,352 random variables showing that the algorithm performs satisfactorily. © Springer-Verlag Berlin Heidelberg 2005.
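For intuition, Markov boundary learners of this family typically follow a grow-shrink scheme built on conditional-independence tests. The sketch below is not the authors' algorithm; it is a generic IAMB-style illustration, assuming an `indep(x, y, z)` conditional-independence test (here a placeholder oracle) supplied by the caller.

```python
# Hedged sketch of grow-shrink (IAMB-style) Markov boundary discovery.
# NOT the paper's specific algorithm; it only illustrates the two-phase
# scheme such methods use, which is correct under faithfulness given a
# perfect conditional-independence oracle.

def markov_boundary(target, variables, indep):
    """Estimate the Markov boundary of `target`.

    `indep(x, y, z)` must return True iff x is independent of y
    given the set of variables z.
    """
    mb = set()
    # Growing phase: add any variable still dependent on the target
    # given the current boundary estimate.
    changed = True
    while changed:
        changed = False
        for v in variables:
            if v != target and v not in mb and not indep(target, v, mb):
                mb.add(v)
                changed = True
    # Shrinking phase: remove false positives that are independent of
    # the target given the rest of the boundary.
    for v in list(mb):
        if indep(target, v, mb - {v}):
            mb.remove(v)
    return mb


# Toy oracle for the chain A -> B -> C: A and C are independent
# only when conditioning on B (a hypothetical example, not from the paper).
def chain_indep(x, y, z):
    return {x, y} == {"A", "C"} and "B" in z
```

With this oracle, `markov_boundary("B", ["A", "B", "C"], chain_indep)` recovers `{"A", "C"}`, the parents-and-children set of B in the chain; with finite data, `indep` would be replaced by a statistical test, whose reliability drives the data-efficiency concern the abstract raises.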


CITATION STYLE

APA

Peña, J. M., Björkegren, J., & Tegnér, J. (2005). Scalable, efficient and correct learning of Markov boundaries under the faithfulness assumption. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3571 LNAI, pp. 136–147). Springer Verlag. https://doi.org/10.1007/11518655_13
