Efficiently approximating Markov tree bagging for high-dimensional density estimation


Abstract

We consider algorithms for generating Mixtures of Bagged Markov Trees for density estimation. In problems defined over many variables and with few available observations, such mixtures generally outperform a single Markov tree maximizing the data likelihood, but they are far more expensive to compute. In this paper, we describe new algorithms for approximating such models, with the aim of speeding up learning without sacrificing accuracy. More specifically, we propose to use a filtering step, obtained as a by-product of computing a first Markov tree, to avoid considering poor candidate edges in the subsequently generated trees. We compare these algorithms on synthetic data sets to Mixtures of Bagged Markov Trees, to a single Markov tree derived by the classical Chow-Liu algorithm, and to a recently proposed randomized scheme for building tree mixtures. © 2011 Springer-Verlag.
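To make the general idea concrete, the following is a minimal, self-contained sketch in Python of bagged Chow-Liu trees with a candidate-edge filter derived from a first full pass over the data. It is only loosely inspired by the approach summarized in the abstract: the filtering heuristic shown here (keeping the top-K mutual-information edges from the first pass as the only candidates for the bootstrap trees), all function names, and all parameters are assumptions made for illustration, not the authors' exact algorithm.

import numpy as np
from itertools import combinations
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree


def mutual_information(x, y):
    """Empirical mutual information between two discrete samples."""
    n = len(x)
    joint, px, py = {}, {}, {}
    for a, b in zip(x, y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    for (a, b), c in joint.items():
        px[a] = px.get(a, 0) + c
        py[b] = py.get(b, 0) + c
    mi = 0.0
    for (a, b), c in joint.items():
        mi += (c / n) * np.log(c * n / (px[a] * py[b]))
    return mi


def chow_liu_tree(data, candidate_edges=None):
    """Maximum-weight spanning tree over mutual-information edge weights.

    If candidate_edges is given, only those variable pairs are scored;
    otherwise all O(p^2) pairs are considered, as in classical Chow-Liu.
    """
    n, p = data.shape
    if candidate_edges is None:
        candidate_edges = list(combinations(range(p), 2))
    weights = np.zeros((p, p))
    for i, j in candidate_edges:
        weights[i, j] = mutual_information(data[:, i], data[:, j])
    # minimum_spanning_tree finds a minimum; negate to get a maximum-MI tree.
    mst = minimum_spanning_tree(csr_matrix(-weights))
    edges = [(int(i), int(j)) for i, j in zip(*mst.nonzero())]
    return edges, weights


def bagged_markov_trees(data, n_trees=10, keep_ratio=0.1, rng=None):
    """Bagging with an edge filter obtained from a first full Chow-Liu pass."""
    rng = np.random.default_rng(rng)
    n, p = data.shape
    # First pass on the original data: score every edge once.
    _, full_weights = chow_liu_tree(data)
    # Assumed filtering heuristic: keep only the strongest edges as candidates
    # for all subsequent bootstrap trees, so each tree scores far fewer pairs.
    all_pairs = sorted(combinations(range(p), 2),
                       key=lambda e: full_weights[e[0], e[1]], reverse=True)
    k = max(p - 1, int(keep_ratio * len(all_pairs)))
    candidates = all_pairs[:k]
    trees = []
    for _ in range(n_trees):
        boot = data[rng.integers(0, n, size=n)]
        edges, _ = chow_liu_tree(boot, candidate_edges=candidates)
        trees.append(edges)
    # A uniform mixture over these trees defines the density estimate.
    return trees


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(200, 8))  # toy binary data set
    for t, edges in enumerate(bagged_markov_trees(X, n_trees=5, keep_ratio=0.3, rng=1)):
        print(f"tree {t}: {edges}")

The point of the sketch is the cost structure: the quadratic mutual-information scoring is paid once on the original sample, while each bootstrap replicate only re-scores the retained candidate edges, which is where the speed-up over plain bagging would come from.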

Citation (APA)

Schnitzler, F., Ammar, S., Leray, P., Geurts, P., & Wehenkel, L. (2011). Efficiently approximating Markov tree bagging for high-dimensional density estimation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6913 LNAI, pp. 113–128). Springer Verlag. https://doi.org/10.1007/978-3-642-23808-6_8
