The Complexity of Learning

  • Rojas, R.

Abstract

In the previous chapters we extensively discussed the properties of multilayer neural networks and some learning algorithms. Although it is now clear that backpropagation is a statistical method for function approximation, two questions remain open: first, what kind of functions can be approximated using multilayer neural networks, and second, what is the expected computational complexity of the learning problem. We deal with both issues in this chapter.

10.1.1 Learning algorithms for multilayer networks

The backpropagation algorithm has the disadvantage that it becomes very slow in flat regions of the error function. In such regions the algorithm should take a larger iteration step, but this is precluded by the length of the gradient, which is too small there; gradient descent can therefore be slowed down arbitrarily. One might think that this kind of problem could be solved by switching to another learning algorithm, perhaps one capable of finding a solution in a number of steps that is polynomial in the number of weights in the network. But this is not so. We show in this chapter that finding the appropriate weights for a learning problem consisting of just one input-output pair is computationally hard: the task belongs to the class of NP-complete problems, for which no polynomial-time algorithm is known and probably none exists.

10.1.2 Hilbert's problem and computability
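
The flat-region remark in 10.1.1 can be illustrated with a minimal numerical sketch (not from the chapter; the error surface E(w) = tanh(w)^4 and all parameter values are made up for illustration). Plain gradient descent updates a weight by Δw = -γ∇E(w), so where the gradient nearly vanishes the step length shrinks with it and progress becomes arbitrarily slow:

```python
import numpy as np

# Hypothetical error surface that is almost flat near w = 0,
# where its gradient nearly vanishes.
def error(w):
    return np.tanh(w) ** 4

def gradient(w):
    # dE/dw = 4 * tanh(w)^3 * (1 - tanh(w)^2)
    return 4.0 * np.tanh(w) ** 3 * (1.0 - np.tanh(w) ** 2)

w = 0.05              # start inside the flat region
learning_rate = 0.1
for step in range(5):
    g = gradient(w)
    w -= learning_rate * g          # step length is proportional to |grad|
    print(f"step {step}: E = {error(w):.3e}, |grad| = {abs(g):.3e}")
```

The printed gradient magnitudes stay tiny, so the weight barely moves; this is why a larger iteration step, rather than the raw gradient, would be needed to cross such regions.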

Cite

Rojas, R. (1996). The Complexity of Learning. In Neural Networks (pp. 263–285). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-61068-4_10
