Approximation and Learning of Convex Superpositions

Abstract

We present a fairly general method for constructing classes of functions of finite scale-sensitive dimension (the scale-sensitive dimension is a generalization of the Vapnik-Chervonenkis dimension to real-valued functions). The construction is as follows: start from a class F of functions of finite VC dimension, take the convex hull coF of F, and then take the closure of coF in an appropriate sense. As an example, we study in more detail the case where F is the class of threshold functions. It is shown that this closure includes two important classes of functions:

• neural networks with one hidden layer and bounded output weights;
• the so-called Γ class of Barron, which was shown to satisfy a number of interesting approximation and closure properties.

We also give an integral representation in the form of a "continuous neural network" which generalizes Barron's. It is shown that the existence of an integral representation is equivalent to both L2 and L∞ approximability. A preliminary version of this paper was presented at EuroCOLT'95; the main difference from the conference version is the addition of Theorem 7, where we show that a key topological result fails when the VC dimension hypothesis is removed. © 1997 Academic Press.
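To make the construction concrete, the following is a minimal sketch in standard notation; the threshold activation θ, the weight bound C, and the measure μ are illustrative conventions inferred from the abstract, not the paper's exact definitions.

\[
  \mathrm{co}\,F \;=\; \Bigl\{\, \sum_{i=1}^{n} c_i f_i \;:\; n \in \mathbb{N},\ f_i \in F,\ c_i \ge 0,\ \sum_{i=1}^{n} c_i = 1 \,\Bigr\},
\]

with the closure then taken, e.g., in L2 or L∞. When F consists of threshold functions x ↦ θ(a·x + b), a scaled element of the closed convex hull is a one-hidden-layer network with bounded output weights,

\[
  x \;\mapsto\; \sum_{i=1}^{n} c_i\, \theta(a_i \cdot x + b_i), \qquad \sum_{i=1}^{n} |c_i| \le C,
\]

and the "continuous neural network" representation replaces the finite sum by an integral against a signed measure μ of bounded total variation:

\[
  f(x) \;=\; \int \theta(a \cdot x + b)\, d\mu(a,b), \qquad \|\mu\| \le C.
\]

Read this way, the equivalence stated in the abstract says that f admits such an integral representation exactly when it can be approximated by finite networks of this form, both in L2 and in L∞.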

Cite

APA

Gurvits, L., & Koiran, P. (1997). Approximation and Learning of Convex Superpositions. Journal of Computer and System Sciences, 55(1), 161–170. https://doi.org/10.1006/jcss.1997.1506
