Does data-model co-evolution improve generalization performance of evolving learners?

Abstract

In co-evolution as defined by Hillis, the data that defines a problem evolves simultaneously with a population of models searching for solutions. Here, one way in which this could work is explored theoretically. Co-evolution could lead to improvement by evolving the data towards a set such that a model trained on it generalizes better than a model trained on the initial data. It is argued that the data will not necessarily co-evolve to such a set in general. It is then shown, in an extremely simple toy example, that if there is too much optimization per generation, the system oscillates between very low-fitness solutions and performs much worse than a system with no co-evolution. If the learning parameters are scaled appropriately, however, the data set does evolve to one which leads to better generalization performance. The improvement can be made arbitrarily large in the large-population limit, but strong finite-population effects limit what can be achieved.
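
To make the argument concrete, here is a minimal sketch (in Python) of the kind of toy system the abstract alludes to. Every specific in it is an illustrative assumption rather than the paper's actual model: a target t(x) = x^2 on a fixed grid (DOMAIN), a deliberately mis-specified linear learner y = w*x, a data population that jumps each generation to the points the current model gets most wrong (an antagonistic step in the spirit of Hillis), and a step size lr standing in for the amount of optimization per generation.

    # Illustrative sketch only; the problem, model, and update rules below
    # are assumptions chosen for this example, not the paper's actual system.
    DOMAIN = [i / 10 for i in range(1, 21)]      # x grid on (0, 2]
    N_DATA = 3                                   # size of the evolving data set

    def truth(x):
        """Target function; the linear model cannot represent it exactly."""
        return x * x

    def best_w(data):
        """Least-squares slope for the model y = w*x on the given data."""
        return sum(x * truth(x) for x in data) / sum(x * x for x in data)

    def gen_error(w):
        """Generalization error: mean squared error over the whole domain."""
        return sum((w * x - truth(x)) ** 2 for x in DOMAIN) / len(DOMAIN)

    def coevolve(lr, generations=200):
        w, data = 0.0, [0.5, 1.0, 1.5]           # initial model and data set
        for _ in range(generations):
            # Model step: move w toward the optimum for the *current* data;
            # lr controls how much optimization happens per generation.
            w += lr * (best_w(data) - w)
            # Data step (antagonistic): the data population jumps to the
            # points on which the current model performs worst.
            data = sorted(DOMAIN, key=lambda x: -(w * x - truth(x)) ** 2)[:N_DATA]
        return gen_error(w)

    print("no co-evolution:", round(gen_error(best_w([0.5, 1.0, 1.5])), 3))
    print("lr = 1.0       :", round(coevolve(1.0), 3))  # period-2 oscillation
    print("lr = 0.1       :", round(coevolve(0.1), 3))  # settles near a compromise

In this sketch, lr = 1.0 lets the model fit the current data exactly each generation, after which the data flips to the opposite region of the domain; the pair locks into a period-2 oscillation between two poorly generalizing slopes. With lr = 0.1 the model effectively averages over successive adversarial data sets and hovers near a compromise slope with lower error over the whole domain, mirroring the scaling behaviour the abstract describes.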

Citation (APA)

Shapiro, J. L. (1998). Does data-model co-evolution improve generalization performance of evolving learners? In Lecture Notes in Computer Science (Vol. 1498, pp. 540–549). Springer-Verlag. https://doi.org/10.1007/bfb0056896
