Growing methods for constructing Recursive Deterministic Perceptron neural networks and knowledge extraction

Abstract

The Recursive Deterministic Perceptron (RDP) feedforward multilayer neural network is a generalization of the single layer perceptron topology (SLPT). This new model is capable of solving any two-class classification problem, as opposed to the single layer perceptron, which can only solve classification problems dealing with linearly separable (LS) sets (two subsets X and Y of ℝd are said to be linearly separable if there exists a hyperplane such that the elements of X and the elements of Y lie on opposite sides of it). For all classification problems, the construction of an RDP is done automatically, and thus convergence to a solution is always guaranteed. We propose three growing methods for constructing an RDP neural network. These methods perform, respectively, batch, incremental, and modular learning. We also show how the knowledge embedded in an RDP neural network model can always be expressed, transparently, as a finite union of open polytopes. The combination of the decision regions of RDP models, by using Boolean operations, is also discussed. © 1998 Elsevier Science B.V. All rights reserved.
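The notion of linear separability that motivates the RDP can be illustrated with the classic perceptron learning rule, which converges only when the two classes are linearly separable. The sketch below is an illustration of that limitation, not the paper's RDP construction (all names here are the author's own, not from the article):

```python
# Minimal perceptron learning rule: finds a separating hyperplane
# w.x + b = 0 when the two classes are linearly separable (LS),
# and fails to converge otherwise -- the limitation the RDP removes.

def perceptron(X, y, epochs=100):
    """Seek (w, b) with sign(w.x + b) matching labels y in {-1, +1}.
    Returns (w, b) on convergence, or None if no separator was found
    within the epoch budget."""
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        errors = 0
        for x, t in zip(X, y):
            s = sum(wi * xi for wi, xi in zip(w, x)) + b
            if t * s <= 0:  # misclassified (or on the hyperplane): update
                w = [wi + t * xi for wi, xi in zip(w, x)]
                b += t
                errors += 1
        if errors == 0:
            return w, b     # converged: the two classes are LS
    return None             # no separator found: likely not LS

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
# AND is linearly separable, so the rule converges...
print(perceptron(X, [-1, -1, -1, 1]) is not None)  # True
# ...while XOR, the textbook non-LS problem, is not:
print(perceptron(X, [-1, 1, 1, -1]))               # None
```

The RDP sidesteps this failure mode by recursively adding intermediate neurons until the transformed problem becomes linearly separable, which is why its construction is guaranteed to terminate for any two-class problem.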

Citation (APA)

Tajine, M., & Elizondo, D. (1998). Growing methods for constructing Recursive Deterministic Perceptron neural networks and knowledge extraction. Artificial Intelligence, 102(2), 295–322. https://doi.org/10.1016/S0004-3702(98)00057-5
