On phase transitions in learning sparse networks

Abstract

In this paper we study the identification of sparse interaction networks as a machine learning problem. Sparsity means that we are provided with a small data set and a large number of unknown components of the system, most of which are zero. Under these circumstances, a model must be learned that fits the underlying system and is capable of generalization; this corresponds to the student-teacher setting in machine learning. In the first part of this paper we introduce a learning algorithm, based on L1-minimization, to identify interaction networks from poor data, and we analyze its dynamics with respect to phase transitions. The efficiency of the algorithm is measured by the generalization error, which represents the probability that the student is a good fit to the teacher. In the second part of this paper we show that the generalization error observed for one specific system size can be used to estimate the generalization error for other system sizes. A comparison with a set of simulation experiments shows a very good fit. © Springer-Verlag Berlin Heidelberg 2007.
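To make the setting concrete, the following Python sketch illustrates the general idea of recovering a sparse "teacher" from few samples via L1-regularized regression (Lasso). It is not the authors' algorithm; the sample count n, the number of unknowns p, the sparsity level k, the regularization strength alpha, and the recovery threshold are all illustrative assumptions.

    # Minimal sketch: sparse recovery via L1-minimization in a student-teacher setup.
    # Assumes numpy and scikit-learn are installed.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)

    n, p, k = 30, 100, 5          # few samples, many unknowns, few non-zeros
    X = rng.standard_normal((n, p))

    teacher = np.zeros(p)          # the "teacher": a sparse interaction vector
    support = rng.choice(p, size=k, replace=False)
    teacher[support] = rng.standard_normal(k)

    y = X @ teacher                # noiseless observations generated by the teacher

    # The "student": fit an L1-penalized linear model to the scarce data.
    student = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(X, y)

    # Score the student: does its estimated support match the teacher's?
    recovered = np.flatnonzero(np.abs(student.coef_) > 1e-3)
    print("true support:     ", np.sort(support))
    print("recovered support:", recovered)
    print("exact support recovery:", set(recovered) == set(support))

Repeating such an experiment while varying the ratio of samples to unknowns is one way to observe the phase-transition behavior the abstract refers to: recovery probability changes sharply around a critical sampling ratio.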

Cite (APA)

Hollanders, G., Bex, G. J., Gyssens, M., Westra, R. L., & Tuyls, K. (2007). On phase transitions in learning sparse networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4701 LNAI, pp. 591–599). Springer Verlag. https://doi.org/10.1007/978-3-540-74958-5_57
