We investigate, within the PAC learning model, the problem of learning nonoverlapping perceptron networks (also known as read-once formulas over a weighted threshold basis). These are loop-free neural nets in which each node has only one outgoing weight. We give a polynomial-time algorithm that PAC learns any nonoverlapping perceptron network using examples and membership queries. The algorithm identifies both the architecture and the weight values needed to represent the function to be learned. Our results shed some light on the effect of overlap on the complexity of learning in neural networks. © 1994 Kluwer Academic Publishers.
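To make the hypothesis class concrete, the following is a minimal Python sketch of a nonoverlapping (read-once) perceptron network: a tree of weighted threshold gates in which each input variable and each gate feeds exactly one parent. This illustrates the representation only, not the authors' learning algorithm; all names (Var, Gate, evaluate) are illustrative.

```python
from dataclasses import dataclass
from typing import Dict, Sequence, Union

@dataclass
class Var:
    name: str  # an input variable; in a read-once network it appears exactly once

@dataclass
class Gate:
    children: Sequence["Node"]   # subformulas feeding this perceptron
    weights: Sequence[float]     # one weight per incoming edge
    threshold: float             # gate outputs 1 iff the weighted sum >= threshold

Node = Union[Var, Gate]

def evaluate(node: Node, x: Dict[str, int]) -> int:
    """Evaluate the read-once threshold formula on a 0/1 assignment x."""
    if isinstance(node, Var):
        return x[node.name]
    total = sum(w * evaluate(c, x) for w, c in zip(node.weights, node.children))
    return int(total >= node.threshold)

# Example target: root fires iff x1 + x2 + 2*THR_1(x3 + x4) >= 2.
# Every variable occurs once, so the network is nonoverlapping.
inner = Gate(children=[Var("x3"), Var("x4")], weights=[1.0, 1.0], threshold=1.0)
root = Gate(children=[Var("x1"), Var("x2"), inner],
            weights=[1.0, 1.0, 2.0], threshold=2.0)

print(evaluate(root, {"x1": 0, "x2": 0, "x3": 1, "x4": 0}))  # 1: inner gate fires, contributes 2
print(evaluate(root, {"x1": 1, "x2": 0, "x3": 0, "x4": 0}))  # 0: weighted sum is 1 < 2
```

In this setting, a membership query simply asks an oracle for the value of the unknown target function on an assignment chosen by the learner (i.e., a call like `evaluate(target, x)` on a hidden `target`), and the learner must recover both the tree structure and suitable weights.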
CITATION STYLE
Hancock, T. R., Golea, M., & Marchand, M. (1994). Learning nonoverlapping perceptron networks from examples and membership queries. Machine Learning, 16(3), 161–183. https://doi.org/10.1007/BF00993305