Novel and efficient randomized algorithms for feature selection

39 citations · 52 Mendeley readers

Abstract

Feature selection is a crucial problem in efficient machine learning, and it also contributes greatly to the explainability of machine-driven decisions. Some methods, such as decision trees and the Least Absolute Shrinkage and Selection Operator (LASSO), select features during training; however, these embedded approaches apply only to a small subset of machine learning models. Wrapper-based methods can select features independently of the underlying machine learning model, but they often suffer from high computational cost. Many randomized algorithms have been designed to improve their efficiency. In this paper, we propose automatic breadth searching and attention searching adjustment approaches to further speed up randomized wrapper-based feature selection. We provide a theoretical computational complexity analysis and explain the generic parallelizability of our algorithms. Experiments on both synthetic and real datasets with different base machine learning models show that, compared with existing approaches, the proposed techniques locate a more meaningful set of features with high efficiency.
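To make the wrapper-based setting concrete, below is a minimal, self-contained sketch of randomized wrapper-based feature selection — not the paper's breadth/attention algorithms, just the generic scheme they accelerate. The names `ols_mse` and `random_wrapper_search` are hypothetical; the base model here is ordinary least squares (fit via normal equations in pure Python), and each trial trains it on a randomly sampled feature subset and keeps the subset with the lowest training error.

```python
import random

def ols_mse(X, y, cols):
    """Fit OLS (with intercept) on the selected columns via normal
    equations and return the training mean squared error."""
    n = len(X)
    feats = [[1.0] + [row[c] for c in cols] for row in X]
    k = len(cols) + 1
    # Normal equations: A w = b, with A = F^T F and b = F^T y.
    A = [[sum(f[i] * f[j] for f in feats) for j in range(k)] for i in range(k)]
    b = [sum(feats[r][i] * y[r] for r in range(n)) for i in range(k)]
    # Gaussian elimination with partial pivoting.
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            m = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= m * A[i][c]
            b[r] -= m * b[i]
    w = [0.0] * k
    for i in reversed(range(k)):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, k))) / A[i][i]
    preds = [sum(wi * fi for wi, fi in zip(w, f)) for f in feats]
    return sum((p - t) ** 2 for p, t in zip(preds, y)) / n

def random_wrapper_search(X, y, d, trials=200, seed=0):
    """Randomized wrapper search: sample feature subsets, train the
    base model on each, and keep the best-scoring subset."""
    rng = random.Random(seed)
    best = (float("inf"), ())
    for _ in range(trials):
        k = rng.randint(1, 3)  # subset size sampled anew each trial
        cols = tuple(sorted(rng.sample(range(d), k)))
        best = min(best, (ols_mse(X, y, list(cols)), cols))
    return best

# Demo on synthetic data: y depends only on features 0 and 2.
data_rng = random.Random(1)
n, d = 200, 6
X = [[data_rng.gauss(0, 1) for _ in range(d)] for _ in range(n)]
y = [3 * row[0] - 2 * row[2] + data_rng.gauss(0, 0.1) for row in X]
mse, cols = random_wrapper_search(X, y, d)
```

Because the base model is treated as a black box that is simply retrained per subset, any learner with a training-error (or validation-error) score can be substituted for `ols_mse` — which is also why each wrapper evaluation is expensive and why randomized and parallel variants matter.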

Citation (APA)

Wang, Z., Xiao, X., & Rajasekaran, S. (2020). Novel and efficient randomized algorithms for feature selection. Big Data Mining and Analytics, 3(3), 208–224. https://doi.org/10.26599/BDMA.2020.9020005
