Learning nonoverlapping perceptron networks from examples and membership queries

Abstract

We investigate, within the PAC learning model, the problem of learning nonoverlapping perceptron networks (also known as read-once formulas over a weighted threshold basis). These are loop-free neural nets in which each node has only one outgoing weight. We give a polynomial-time algorithm that PAC learns any nonoverlapping perceptron network using examples and membership queries. The algorithm identifies both the architecture and the weight values needed to represent the function to be learned. Our results shed some light on the effect of overlap on the complexity of learning in neural networks. © 1994 Kluwer Academic Publishers.
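
To make the function class concrete, here is a minimal sketch (not the authors' algorithm) of a nonoverlapping perceptron network: a tree of weighted-threshold gates in which each Boolean input variable feeds exactly one gate, together with a membership-query oracle that labels a point chosen by the learner. All class and function names below are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass
from typing import List, Union


@dataclass
class Leaf:
    """A single Boolean input variable x_i (each index appears once: read-once)."""
    index: int


@dataclass
class ThresholdGate:
    """Perceptron node: outputs 1 iff the weighted sum of its children meets the threshold."""
    children: List[Union["ThresholdGate", Leaf]]
    weights: List[float]
    threshold: float


def evaluate(node: Union[ThresholdGate, Leaf], x: List[int]) -> int:
    """Evaluate the read-once threshold network on a Boolean assignment x."""
    if isinstance(node, Leaf):
        return x[node.index]
    total = sum(w * evaluate(c, x) for w, c in zip(node.weights, node.children))
    return int(total >= node.threshold)


def membership_query(target: ThresholdGate, x: List[int]) -> int:
    """A membership query returns the target's label on a learner-chosen point."""
    return evaluate(target, x)


if __name__ == "__main__":
    # f(x) = THR_1.5( x0 + x1 + 2 * THR_0.5(x2 + x3) ), each variable used exactly once.
    inner = ThresholdGate([Leaf(2), Leaf(3)], [1.0, 1.0], 0.5)
    target = ThresholdGate([Leaf(0), Leaf(1), inner], [1.0, 1.0, 2.0], 1.5)
    print(membership_query(target, [0, 1, 1, 0]))  # -> 1, since 0 + 1 + 2*1 = 3 >= 1.5
```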

Cite (APA)

Hancock, T. R., Golea, M., & Marchand, M. (1994). Learning nonoverlapping perceptron networks from examples and membership queries. Machine Learning, 16(3), 161–183. https://doi.org/10.1007/BF00993305
