Approximation and learning of convex superpositions

Abstract

We present a fairly general method for constructing classes of functions of finite scale-sensitive dimension (the scale-sensitive dimension is a generalization of the Vapnik-Chervonenkis dimension to real-valued functions). The construction is as follows: start from a class $F$ of functions of finite VC dimension, take the convex hull $\mathrm{co}\,F$ of $F$, and then take the closure $\overline{\mathrm{co}}\,F$ of $\mathrm{co}\,F$ in an appropriate sense. As an example, we study in more detail the case where $F$ is the class of threshold functions. It is shown that $\overline{\mathrm{co}}\,F$ includes two important classes of functions: neural networks with one hidden layer and bounded output weights, and the so-called class $\Gamma$ of Barron, which was shown to satisfy a number of interesting approximation and closure properties. We also give an integral representation in the form of a "continuous neural network" which generalizes Barron's. It is shown that the existence of an integral representation is equivalent to both $L^2$ and $L^\infty$ approximability.
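
Spelled out, the construction described in the abstract reads roughly as follows. This is a minimal sketch reconstructed from the abstract alone: the use of combinations with total weight at most a constant $C$, and the parameter space $\Omega$ of weight-bias pairs, are assumptions for illustration, not the authors' exact definitions.

% Sketch of the construction (assumptions noted above).
% F: a class of threshold functions x -> 1{w . x + b >= 0} of finite VC dimension.
% Convex hull with total output weight bounded by C (assumed normalization):
\[
  \mathrm{co}\,F \;=\; \Bigl\{\, \sum_{i=1}^{k} a_i f_i \;:\; k \in \mathbb{N},\ f_i \in F,\ \sum_{i=1}^{k} |a_i| \le C \,\Bigr\},
\]
% and \overline{co} F is its closure, e.g. in L^2(P) for a probability measure P.
%
% A one-hidden-layer neural network with bounded output weights is an
% element of co F by definition:
\[
  f(x) \;=\; \sum_{i=1}^{k} a_i\, \mathbf{1}\{w_i \cdot x + b_i \ge 0\},
  \qquad \sum_{i=1}^{k} |a_i| \le C .
\]
% The "continuous neural network" of the abstract replaces the finite sum
% by an integral against a signed measure mu of total variation at most C:
\[
  f(x) \;=\; \int_{\Omega} \mathbf{1}\{w \cdot x + b \ge 0\}\, d\mu(w,b),
  \qquad |\mu|(\Omega) \le C .
\]

Per the abstract, $f$ admits such an integral representation exactly when it is approximable by finite convex superpositions of threshold functions, in $L^2$ and equivalently in $L^\infty$; and the point of the construction is that the resulting class $\overline{\mathrm{co}}\,F$ has finite scale-sensitive dimension.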

Citation (APA)

Gurvits, L., & Koiran, P. (1995). Approximation and learning of convex superpositions. In Lecture Notes in Computer Science (Vol. 904, pp. 222–236). Springer. https://doi.org/10.1007/3-540-59119-2_180
