A spatial EA framework for parallelizing machine learning methods

Abstract

The scalability of machine learning (ML) algorithms has become increasingly important due to the ever-increasing size of datasets and the growing complexity of the induced models. Standard approaches for dealing with this issue generally involve developing parallel and distributed versions of the ML algorithms and/or reducing dataset sizes via sampling techniques. In this paper we describe an alternative approach that combines features of spatially-structured evolutionary algorithms (SSEAs) with the well-known machine learning techniques of ensemble learning and boosting. The result is a powerful and robust framework for parallelizing ML methods in a way that does not require changes to the ML methods. We first describe the framework and illustrate its behavior on a simple synthetic problem, and then evaluate its scalability and robustness using several different ML methods on a set of benchmark problems from the UC Irvine ML database. © 2012 Springer-Verlag.
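
As a rough illustration of the kind of framework the abstract describes, the sketch below arranges unmodified scikit-learn decision trees on a small toroidal grid, trains each cell with its own per-example weights, exchanges boosting-style weight updates only among neighbouring cells, and predicts by majority vote over the grid. The grid size, the weight-update rule, and all names here are illustrative assumptions for this sketch, not the authors' actual algorithm from the paper.

# Minimal sketch (assumptions noted above): a 4x4 toroidal grid of base learners
# combined by majority vote; cells train independently, so step (1) parallelizes
# trivially, and only step (2) involves local communication between neighbours.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

GRID = 4                                    # 4x4 toroidal grid of cells
cells = [[DecisionTreeClassifier(max_depth=3, random_state=0)
          for _ in range(GRID)] for _ in range(GRID)]
# Each cell keeps its own per-example weights (boosting-style).
weights = np.ones((GRID, GRID, len(y))) / len(y)

def neighbours(i, j):
    """Von Neumann neighbourhood on a torus."""
    return [((i - 1) % GRID, j), ((i + 1) % GRID, j),
            (i, (j - 1) % GRID), (i, (j + 1) % GRID)]

for generation in range(5):
    # (1) Train every cell on the shared data with its local weights;
    #     this loop is embarrassingly parallel.
    for i in range(GRID):
        for j in range(GRID):
            cells[i][j].fit(X, y, sample_weight=weights[i, j])

    # (2) Boosting-like step: up-weight examples the cell misclassified,
    #     then mix weights with the neighbouring cells (local migration).
    new_w = weights.copy()
    for i in range(GRID):
        for j in range(GRID):
            wrong = cells[i][j].predict(X) != y
            w = weights[i, j] * np.where(wrong, 2.0, 1.0)
            for ni, nj in neighbours(i, j):
                w += 0.25 * weights[ni, nj]
            new_w[i, j] = w / w.sum()
    weights = new_w

def predict(X_new):
    """Majority vote over all cells in the grid (binary labels assumed)."""
    votes = np.stack([cells[i][j].predict(X_new)
                      for i in range(GRID) for j in range(GRID)])
    return (votes.mean(axis=0) > 0.5).astype(int)

print("training accuracy:", (predict(X) == y).mean())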

Citation (APA)

Kamath, U., Kaers, J., Shehu, A., & De Jong, K. A. (2012). A spatial EA framework for parallelizing machine learning methods. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7491 LNCS, pp. 206–215). https://doi.org/10.1007/978-3-642-32937-1_21
