In the last decade, connectionist models have been proposed that can process structured information directly. These methods, which are based on the use of graphs to represent the data and the relationships within the data, are particularly suitable for handling relational learning tasks. In this paper, two recently proposed architectures of this kind, namely Graph Neural Networks (GNNs) and Relational Neural Networks (RelNNs), are compared and discussed, along with their corresponding learning schemes. The goal is to evaluate the performance of these methods on benchmarks commonly used by the relational learning community. Moreover, we report differences in the behavior of the two models, in order to gain insight into possible extensions of the approaches. Since RelNNs have been developed with the specific task of learning aggregate functions in mind, some experiments specifically address that task. In addition, we carry out more general experiments on the mutagenesis and biodegradability datasets, on which several other relational learners have been evaluated. The experimental results are promising and suggest that RelNNs and GNNs can be a viable approach for learning on relational data. © The Author(s) 2010.
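To illustrate the kind of graph-based processing the compared models rely on, the following is a minimal sketch, in Python with NumPy, of an iterative node-state computation in the spirit of GNNs: node states are repeatedly updated from the node's label and its neighbours' states until they stabilise, and a simple readout maps them to outputs. The toy graph, the sum aggregation, the tanh transition, and the weight matrices W_trans and W_out are assumptions made for this example only; they are not the exact transition and output networks evaluated in the paper.

    import numpy as np

    # Illustrative simplification of GNN-style state computation,
    # not the authors' exact formulation.
    rng = np.random.default_rng(0)

    # Toy graph: 4 nodes, adjacency list of neighbours, 3-dim node labels.
    neighbours = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
    labels = rng.normal(size=(4, 3))
    state_dim = 5

    W_trans = rng.normal(scale=0.1, size=(3 + state_dim, state_dim))  # assumed transition weights
    W_out = rng.normal(scale=0.1, size=(state_dim, 1))                # assumed readout weights

    states = np.zeros((4, state_dim))

    # Iterate the transition function until the node states (approximately)
    # reach a fixed point.
    for _ in range(50):
        new_states = np.zeros_like(states)
        for v, nbrs in neighbours.items():
            # Aggregate neighbour states (sum aggregation assumed here).
            agg = states[nbrs].sum(axis=0)
            new_states[v] = np.tanh(np.concatenate([labels[v], agg]) @ W_trans)
        if np.linalg.norm(new_states - states) < 1e-6:
            states = new_states
            break
        states = new_states

    # Node-level outputs produced by a linear readout of the final states.
    outputs = states @ W_out
    print(outputs.ravel())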
Uwents, W., Monfardini, G., Blockeel, H., Gori, M., & Scarselli, F. (2011). Neural networks for relational learning: An experimental comparison. Machine Learning, 82(3), 315–349. https://doi.org/10.1007/s10994-010-5196-5