Parallel feature selection for regularized least-squares

Abstract

This paper introduces a parallel version of the machine-learning-based feature selection algorithm known as greedy regularized least-squares (RLS). The aim of such machine learning methods is to build accurate predictive models on complex datasets. Greedy RLS is an efficient implementation of the greedy forward feature selection procedure with regularized least-squares, capable of selecting the most predictive features from large datasets. Through matrix-algebra shortcuts, it has previously been shown to perform feature selection in a fraction of the time required by traditional implementations. In this paper, the algorithm is adapted for efficient parallel feature selection so that the method scales to modern clusters. To demonstrate its effectiveness in practice, we applied it to a sample genome-wide association study, as well as a number of other high-dimensional datasets, scaling the method to up to 128 cores. © 2013 Springer-Verlag.
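To make the idea concrete, the following is a minimal sketch of greedy forward feature selection with regularized least-squares, where candidate features are scored in parallel. It is not the authors' greedy RLS algorithm: the paper's method uses matrix-algebra shortcuts that reuse computations across candidate features and selection rounds, whereas this sketch refits each candidate model from scratch and only borrows one standard shortcut (the closed-form leave-one-out error for ridge regression). All function names, the regularization value, and the process-pool setup are illustrative assumptions.

```python
# Naive parallel greedy forward selection with ridge regression (RLS).
# Illustrative sketch only; not the optimized greedy RLS of the paper.
import numpy as np
from concurrent.futures import ProcessPoolExecutor


def loo_mse(X, y, lam=1.0):
    """Leave-one-out MSE of ridge regression on the columns in X.

    Uses the standard closed-form LOO residual for linear smoothers:
    e_i / (1 - H_ii), where H = X (X^T X + lam I)^{-1} X^T.
    """
    n, d = X.shape
    A = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)   # (d, n)
    h_diag = np.einsum("ij,ji->i", X, A)                   # diagonal of hat matrix
    residual = y - X @ (A @ y)
    loo_residual = residual / (1.0 - h_diag)
    return float(np.mean(loo_residual ** 2))


def score_candidate(args):
    """Score one candidate feature added to the current selection."""
    X, y, selected, candidate, lam = args
    cols = selected + [candidate]
    return candidate, loo_mse(X[:, cols], y, lam)


def greedy_forward_selection(X, y, k, lam=1.0, workers=4):
    """Greedily pick k features; candidate scoring is spread over processes."""
    selected, remaining = [], list(range(X.shape[1]))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for _ in range(k):
            jobs = [(X, y, selected, c, lam) for c in remaining]
            results = pool.map(score_candidate, jobs)
            best, _ = min(results, key=lambda r: r[1])
            selected.append(best)
            remaining.remove(best)
    return selected


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50))
    y = X[:, 3] - 2.0 * X[:, 17] + 0.1 * rng.standard_normal(200)
    print(greedy_forward_selection(X, y, k=2))
```

In this baseline, each selection round costs a full ridge solve per candidate feature; the parallelism only divides that work across workers. The contribution of the paper is complementary: its matrix-algebra shortcuts reduce the per-candidate cost itself, and the parallel variant distributes the remaining work so the method scales to cluster sizes such as the 128 cores reported in the abstract.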

Citation (APA)

Okser, S., Airola, A., Aittokallio, T., Salakoski, T., & Pahikkala, T. (2013). Parallel feature selection for regularized least-squares. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7782 LNCS, pp. 280–294). https://doi.org/10.1007/978-3-642-36803-5_20
