Markov Network Structure Learning: A Randomized Feature Generation Approach


Abstract

The structure of a Markov network is typically learned in one of two ways. The first approach is to treat this task as a global search problem. However, these algorithms are slow as they require running the expensive operation of weight (i.e., parameter) learning many times. The second approach involves learning a set of local models and then combining them into a global model. However, it can be computationally expensive to learn the local models for datasets that contain a large number of variables and/or examples. This paper pursues a third approach that views Markov network structure learning as a feature generation problem. The algorithm combines a data-driven, specific-to-general search strategy with randomization to quickly generate a large set of candidate features that all have support in the data. It uses weight learning, with L1 regularization, to select a subset of generated features to include in the model. In a large empirical study, we find that our algorithm is as accurate as other state-of-the-art methods while running much faster.
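To make the general idea concrete, the following is a minimal, illustrative sketch, not the authors' algorithm: it generates conjunctive candidate features directly from training examples (so every feature has support in the data) and then uses scikit-learn's L1-regularized logistic regression on a single target variable as a crude stand-in for pseudo-likelihood weight learning with L1 regularization. All function names, parameters, and the toy data are assumptions made for this example.

# Illustrative sketch only -- NOT the paper's algorithm. Binary data is assumed,
# and L1 logistic regression on one variable is used as a rough proxy for
# L1-regularized pseudo-likelihood weight learning over a Markov network.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def generate_candidate_features(data, num_features, max_len=4):
    """Data-driven, randomized feature generation.

    Each candidate is a conjunction of (variable, value) tests copied from a
    randomly chosen training example, so it is satisfied by at least that
    example (i.e., it has support in the data).
    """
    n_examples, n_vars = data.shape
    features = set()
    while len(features) < num_features:
        row = data[rng.integers(n_examples)]      # specific: start from one example
        k = rng.integers(1, max_len + 1)          # generalize: keep only k of its tests
        chosen = tuple(sorted(rng.choice(n_vars, size=k, replace=False)))
        features.add(tuple((int(v), int(row[v])) for v in chosen))
    return list(features)

def feature_matrix(data, features):
    """Binary matrix: entry (i, j) is 1 iff example i satisfies feature j."""
    cols = []
    for feature in features:
        sat = np.ones(len(data), dtype=bool)
        for var, val in feature:
            sat &= data[:, var] == val
        cols.append(sat)
    return np.column_stack(cols).astype(float)

def select_features(data, features, target_var, C=0.1):
    """Keep candidates that receive a nonzero weight under L1 regularization
    when predicting one variable (a single pseudo-likelihood term)."""
    X = feature_matrix(data, features)
    y = data[:, target_var]
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    model.fit(X, y)
    return [f for f, w in zip(features, model.coef_[0]) if abs(w) > 1e-6]

# Toy usage with random binary data.
data = rng.integers(0, 2, size=(200, 10))
candidates = generate_candidate_features(data, num_features=100)
selected = select_features(data, candidates, target_var=0)
print(f"kept {len(selected)} of {len(candidates)} candidate features")

In this sketch the randomization lies in which example is chosen and which of its tests are kept, mirroring the specific-to-general flavor described in the abstract; the actual method learns weights for the full model rather than for a single conditional.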

Citation (APA)

Van Haaren, J., & Davis, J. (2012). Markov Network Structure Learning: A Randomized Feature Generation Approach. In Proceedings of the 26th AAAI Conference on Artificial Intelligence, AAAI 2012 (pp. 1148–1154). AAAI Press. https://doi.org/10.1609/aaai.v26i1.8315
