Fast learning of relational dependency networks


Abstract

A relational dependency network (RDN) is a directed graphical model widely used for multi-relational data. These networks allow cyclic dependencies, which are necessary to represent relational autocorrelations. We describe an approach for learning both the RDN's structure and its parameters, given an input relational database: first learn a Bayesian network (BN), then transform the Bayesian network to an RDN. Fast Bayesian network learning thus translates into fast RDN learning. The BN-to-RDN transform comprises a simple, local adjustment of the Bayesian network structure and a closed-form transform of the Bayesian network parameters. This method can learn an RDN for a dataset with a million tuples in minutes. We empirically compare our approach to a state-of-the-art RDN learning approach that applies functional gradient boosting, using six benchmark datasets. Learning RDNs via BNs scales much better to large datasets than learning RDNs with current boosting methods.
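The "simple, local adjustment" of the BN structure can be illustrated with a standard construction: each node's RDN parent set is taken to be its Bayesian-network Markov blanket (parents, children, and co-parents), which naturally introduces the cycles that RDNs permit and BNs forbid. The sketch below is illustrative and hedged; the function names and graph encoding are ours, not the authors' implementation, and the closed-form parameter transform from the paper is not shown.

```python
def markov_blanket(dag, node):
    """Markov blanket of `node` in a DAG encoded as {child: set_of_parents}."""
    parents = dag.get(node, set())
    children = {c for c, ps in dag.items() if node in ps}
    co_parents = set()
    for c in children:
        co_parents |= dag.get(c, set())  # other parents of each child
    return (parents | children | co_parents) - {node}

def bn_to_rdn_structure(dag):
    """Turn a BN DAG into a (possibly cyclic) RDN dependency graph:
    every node depends on its whole Markov blanket."""
    nodes = set(dag) | {p for ps in dag.values() for p in ps}
    return {n: markov_blanket(dag, n) for n in nodes}

# Toy BN with a collider A -> C <- B.
bn = {"C": {"A", "B"}}
rdn = bn_to_rdn_structure(bn)
# A and B are co-parents of C, so in the RDN each depends on the other,
# creating a cyclic dependency that a BN could not represent.
```

The resulting graph is symmetric in the sense that whenever X appears among Y's RDN parents, Y appears among X's, which is what allows the RDN to capture relational autocorrelation.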

Cite

CITATION STYLE

APA

Schulte, O., Qian, Z., Kirkpatrick, A. E., Yin, X., & Sun, Y. (2016). Fast learning of relational dependency networks. Machine Learning, 103(3), 377–406. https://doi.org/10.1007/s10994-016-5557-9
