Genetic multivariate polynomials: An alternative tool to neural networks

Abstract

One of the basic problems of applied mathematics is to find a synthetic expression (model) which captures the essence of a system given a (necessarily) finite sample that reflects selected characteristics. When the model considers several independent variables, its mathematical treatment may become burdensome or even downright impossible from a practical standpoint. In this paper we explore the use of an efficient genetic algorithm to select the "best" subset of multivariate monomials out of a full polynomial of the form $F(v_1, \ldots, v_n) = \sum_{i_1=0}^{g_1} \cdots \sum_{i_n=0}^{g_n} c_{i_1 \ldots i_n}\, v_1^{i_1} \cdots v_n^{i_n}$ (where $g_i$ denotes the maximum desired degree for the $i$-th independent variable). This regression problem has been tackled with success using neural networks (NN). However, the "black box" characteristic of such models is frequently cited as a major drawback. We show that it is possible to find a polynomial model for an arbitrary set of data. From selected practical cases we argue that, despite the restrictions of a polynomial basis, our Genetic Multivariate Polynomials (GMP) compete with the NN approach without the mentioned limitation. We also show how to treat constrained functions as unconstrained ones using GMPs. © Springer-Verlag Berlin Heidelberg 2005.
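
The general idea described in the abstract admits a compact sketch. The code below is an illustrative reconstruction, not the authors' published GMP algorithm (whose encoding, operators, and fitness are not specified in this abstract): each individual is a bit mask over the full monomial basis determined by the degree bounds g_1, ..., g_n, the coefficients of the selected monomials are fitted by least squares, and fitness is the negative RMSE plus a small parsimony penalty. Population size, truncation selection, one-point crossover, and bit-flip mutation are assumptions made for the example.

```python
# Minimal sketch of genetic monomial-subset selection for multivariate
# polynomial regression. Illustrative reconstruction only; not the authors'
# published GMP algorithm. Parameter values and operators are assumptions.
import itertools

import numpy as np


def monomial_basis(degrees):
    """All exponent tuples (i_1, ..., i_n) with 0 <= i_k <= g_k."""
    return list(itertools.product(*(range(g + 1) for g in degrees)))


def design_matrix(V, exponents):
    """Evaluate each selected monomial v_1^i_1 * ... * v_n^i_n on every sample row of V."""
    return np.column_stack([np.prod(V ** np.array(e), axis=1) for e in exponents])


def fitness(mask, V, y, basis):
    """Negative (RMSE + small parsimony penalty) of the least-squares fit on the masked monomials."""
    chosen = [b for b, keep in zip(basis, mask) if keep]
    if not chosen:
        return -np.inf
    X = design_matrix(V, chosen)
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    rmse = np.sqrt(np.mean((X @ coeffs - y) ** 2))
    return -(rmse + 1e-3 * len(chosen))  # penalty keeps the subset sparse


def gmp_select(V, y, degrees, pop_size=40, generations=100, p_mut=0.05, seed=None):
    """Return the exponent tuples of the best monomial subset found by the GA."""
    rng = np.random.default_rng(seed)
    basis = monomial_basis(degrees)
    m = len(basis)
    pop = rng.random((pop_size, m)) < 0.5           # random initial bit masks
    for _ in range(generations):
        scores = np.array([fitness(ind, V, y, basis) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]       # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, m)                # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(m) < p_mut          # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    best = max(pop, key=lambda ind: fitness(ind, V, y, basis))
    return [b for b, keep in zip(basis, best) if keep]


if __name__ == "__main__":
    # Recover the monomials of f(v1, v2) = 1 + 3*v1*v2 - 2*v2^2 from noisy samples.
    rng = np.random.default_rng(0)
    V = rng.uniform(-1.0, 1.0, size=(200, 2))
    y = 1 + 3 * V[:, 0] * V[:, 1] - 2 * V[:, 1] ** 2 + 0.01 * rng.normal(size=200)
    print(gmp_select(V, y, degrees=[2, 2], seed=0))
```

On well-behaved data a sketch like this tends to recover a sparse monomial subset; the parsimony penalty is one of several possible ways to prefer smaller models, and the published GMP method may weigh model size and fit error differently.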

Cite

CITATION STYLE: APA

Kuri-Morales, A. F., & Juárez-Almaraz, F. (2005). Genetic multivariate polynomials: An alternative tool to neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3773 LNCS, pp. 262–270). https://doi.org/10.1007/11578079_28
