Approximation by neural networks with weights varying on a finite set of directions



Approximation properties of the MLP (multilayer feedforward perceptron) model of neural networks have been investigated in a great many works over the last 30 years. It has been shown that, for a large class of activation functions, a neural network can approximate arbitrarily well any given continuous function. The most significant result on this problem belongs to Leshno, Lin, Pinkus and Schocken. They proved that a necessary and sufficient condition for a single hidden layer network to have the u.a.p. (universal approximation property) is that its activation function not be a polynomial. Some authors (White, Stinchcombe, Ito, and others) showed that a single hidden layer perceptron with suitably bounded weights can also have the u.a.p. Thus the weights required for the u.a.p. need not be of arbitrarily large magnitude. But what if they are too restricted? What can one say about the approximation properties of networks with an arbitrarily restricted set of weights? The current paper takes a first step toward solving this general problem. We consider neural networks whose sets of weights consist of a finite number of directions. Our purpose is to characterize compact sets X in d-dimensional space such that the network can approximate any continuous function on X. In the special case where the weights vary over only two directions, we give a lower bound for the approximation error and find a sufficient condition for a network to be a best approximation. © 2011 Elsevier Inc.
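The model discussed in the abstract is a single hidden layer network whose hidden-unit weight vectors are restricted to a finite set of directions, i.e. a finite sum of ridge functions along those directions. The following minimal Python sketch (not from the paper; all names and parameter values are illustrative assumptions) shows such a network for d = 2 with weights varying on two directions:

```python
import math

# Hypothetical illustration of the restricted-weight model: a single
# hidden layer network whose weight vectors lie on two fixed directions
# d1 = (1, 0) and d2 = (0, 1) in R^2.  Each hidden unit computes
# sigma(s * (d . x) - theta), so the network is a finite sum of ridge
# functions along only those two directions.

def sigma(t):
    """Logistic sigmoid: a non-polynomial activation, so it satisfies
    the Leshno-Lin-Pinkus-Schocken condition for the u.a.p."""
    return 1.0 / (1.0 + math.exp(-t))

DIRECTIONS = [(1.0, 0.0), (0.0, 1.0)]  # the finite set of directions

def network(x, units):
    """Evaluate sum_i c_i * sigma(s_i * (d_i . x) - theta_i), where each
    direction d_i is drawn from DIRECTIONS.  `units` is a list of
    (direction_index, scale, theta, coefficient) tuples."""
    out = 0.0
    for d_idx, scale, theta, c in units:
        d = DIRECTIONS[d_idx]
        dot = d[0] * x[0] + d[1] * x[1]
        out += c * sigma(scale * dot - theta)
    return out

# With only these two directions, every network output is of the form
# g1(x1) + g2(x2); whether such sums are dense in C(X) depends on the
# geometry of the compact set X, which is what the paper characterizes.
units = [(0, 4.0, 2.0, 1.0), (1, 4.0, 2.0, 1.0)]
print(network((0.5, 0.5), units))  # sigma(0) + sigma(0) = 1.0
```

A genuinely two-variable function such as x1 * x2 lies outside the closure of these sums on, e.g., the unit square, which is why the approximation error bound in the two-direction case is of interest.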




Ismailov, V. E. (2012). Approximation by neural networks with weights varying on a finite set of directions. Journal of Mathematical Analysis and Applications, 389(1), 72–83.
