It is shown that high-order feedforward neural nets of constant depth with piecewise polynomial activation functions and arbitrary real weights can be simulated, for Boolean inputs and outputs, by neural nets of somewhat larger size and depth with linear threshold gates and weights from {-1, 0, 1}. This provides the first known upper bound for the computational power and VC-dimension of the former type of neural nets. It is also shown that in the case of first-order nets with piecewise linear activation functions, one can replace arbitrary real weights by rational numbers with polynomially many bits without changing the Boolean function that is computed by the neural net. To prove these results, we introduce two new methods for reducing nonlinear problems about weights in multi-layer neural nets to linear problems for a transformed set of parameters. In addition, we improve the best known lower bound for the VC-dimension of a neural net with w weights and gates that use the Heaviside function (or other common activation functions such as the sigmoid) from Ω(w) to Ω(w log w). This implies the somewhat surprising fact that the Baum-Haussler upper bound for the VC-dimension of a neural net with linear threshold gates is asymptotically optimal. Finally, it is shown that neural nets with piecewise polynomial activation functions and a constant number of analog inputs are probably approximately learnable (in Valiant's model for PAC-learning).
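The weight-discretization claim for first-order nets can be illustrated in miniature. The sketch below is a hypothetical toy, not the paper's construction: for a single linear threshold gate (the simplest piecewise linear case) over Boolean inputs, rounding the real parameters to low-denominator rationals leaves the computed Boolean function unchanged, since only finitely many input vectors must be classified consistently. The weights and the denominator bound are made up for illustration.

```python
from fractions import Fraction
from itertools import product

def threshold_gate(weights, bias, x):
    """Heaviside linear threshold gate: output 1 iff <weights, x> >= bias."""
    return int(sum(w * xi for w, xi in zip(weights, x)) >= bias)

# A 3-input gate with arbitrary real weights and threshold.
real_w = [0.7319, -1.4142, 2.6180]
real_b = 0.5

# Replace each real parameter by a nearby rational of bounded denominator.
# (Toy rounding; the paper's result bounds the bit-length polynomially.)
rat_w = [Fraction(w).limit_denominator(1000) for w in real_w]
rat_b = Fraction(real_b).limit_denominator(1000)

# Check that the Boolean function on {0,1}^3 is unchanged by the discretization.
for x in product((0, 1), repeat=3):
    assert threshold_gate(real_w, real_b, x) == threshold_gate(rat_w, rat_b, x)
print("rational weights compute the same Boolean function")
```

The assertion holds here because every weighted sum over {0,1}^3 sits at a fixed positive distance from the threshold, so a sufficiently fine rational approximation cannot flip any output; the paper's contribution is to bound the required precision polynomially for entire multi-layer nets.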
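The VC-dimension result can be stated compactly as matching bounds; the following is a restatement of the abstract's claims in LaTeX notation, where the net N and the weight count w are illustrative names rather than notation taken from the paper.

```latex
% Feedforward net \mathcal{N} of linear threshold gates with w weights:
% the Baum--Haussler upper bound meets the new lower bound.
\[
  \mathrm{VCdim}(\mathcal{N}) = O(w \log w)
  \quad\text{and}\quad
  \mathrm{VCdim}(\mathcal{N}) = \Omega(w \log w)
  \;\Longrightarrow\;
  \mathrm{VCdim}(\mathcal{N}) = \Theta(w \log w).
\]
```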
Maass, W. (1993). Bounds for the computational power and learning complexity of analog neural nets. In Proceedings of the Annual ACM Symposium on Theory of Computing (STOC '93) (pp. 335–344). Association for Computing Machinery. https://doi.org/10.1145/167088.167193