Learning nonoverlapping perceptron networks from examples and membership queries

Abstract

We investigate, within the PAC learning model, the problem of learning nonoverlapping perceptron networks (also known as read-once formulas over a weighted threshold basis). These are loop-free neural nets in which each node has only one outgoing weight. We give a polynomial-time algorithm that PAC learns any nonoverlapping perceptron network using examples and membership queries. The algorithm is able to identify both the architecture and the weight values necessary to represent the function to be learned. Our results shed some light on the effect of overlap on the complexity of learning in neural networks. © 1994 Kluwer Academic Publishers.
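To make the target class concrete, the following is a minimal sketch (not the paper's algorithm) of a nonoverlapping perceptron network: a tree of weighted threshold units in which each input variable feeds exactly one node, i.e. a read-once formula over a weighted threshold basis. The `Input`/`ThresholdNode` names and the example formula are illustrative assumptions, not notation from the article.

```python
# Illustrative sketch of a nonoverlapping perceptron network (read-once
# threshold formula). Names and structure are assumptions for exposition.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Input:
    index: int  # which input bit this leaf reads (each bit read at most once)

@dataclass
class ThresholdNode:
    children: List["Formula"]  # sub-formulas feeding this unit
    weights: List[float]       # one weight per child (one outgoing weight each)
    threshold: float

Formula = Union[Input, ThresholdNode]

def evaluate(node: Formula, x: List[int]) -> int:
    """Evaluate the network on a Boolean input vector x, returning 0 or 1."""
    if isinstance(node, Input):
        return x[node.index]
    s = sum(w * evaluate(c, x) for w, c in zip(node.weights, node.children))
    return 1 if s >= node.threshold else 0

# Example: (x0 AND x1) OR x2, written with threshold units.
# Each of x0, x1, x2 appears exactly once, so the formula is read-once.
f = ThresholdNode(
    children=[ThresholdNode([Input(0), Input(1)], [1, 1], 2),  # AND gate
              Input(2)],
    weights=[1, 1],
    threshold=1,  # OR gate
)

print(evaluate(f, [1, 1, 0]))  # 1
print(evaluate(f, [1, 0, 0]))  # 0
print(evaluate(f, [0, 0, 1]))  # 1
```

Because no variable is shared between subtrees, each subtree's output can be probed independently with membership queries, which is the structural property the learning algorithm exploits.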



Citation (APA)

Hancock, T. R., Golea, M., & Marchand, M. (1994). Learning nonoverlapping perceptron networks from examples and membership queries. Machine Learning, 16(3), 161–183. https://doi.org/10.1007/BF00993305

