Measuring Success of Heterogeneous Ensemble Filter Feature Selection Models

  • Noureldien, N. A.
  • Mohammed, E. A.

Abstract

One problem in utilizing ensemble feature selection models in machine learning is that there is no guarantee that an ensemble model will improve classification performance. This implies that different ensemble models have different success probabilities, i.e. different probabilities of improving machine learning performance. This paper introduces the concept of success probability for heterogeneous ensemble models and states the definitions, notations, and algorithms necessary for the mathematical formulation and computation of the success probability. To show how the theory is applied, we create an ensemble filter feature selection model that uses four filter feature selection algorithms (Correlation, Gain Ratio, Info Gain, and One R) as base filters and Max as the combination method. The experimental results showed that the success probability of the developed ensemble filter model, evaluated over a set of 9 machine learning algorithms, is 0.58.
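The Max combination described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn, normalizes each filter's scores to [0, 1], and substitutes two readily available scorers (mutual information for Info Gain and the ANOVA F-statistic as a correlation-style filter) for the paper's four base filters; each feature then keeps its best normalized score across filters.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, f_classif

# Toy dataset standing in for the paper's experimental data.
X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=4, random_state=0)

def minmax(s):
    """Scale a score vector to [0, 1] so filters are comparable."""
    s = np.asarray(s, dtype=float)
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng else np.zeros_like(s)

# Stand-in base filters (the paper uses Correlation, Gain Ratio,
# Info Gain, and One R; only two analogues are sketched here).
scores = [
    minmax(mutual_info_classif(X, y, random_state=0)),  # ~ Info Gain
    minmax(f_classif(X, y)[0]),                         # ~ Correlation
]

# Max combination: each feature's ensemble score is its best
# normalized score over all base filters.
combined = np.max(scores, axis=0)

# Rank features by combined score and keep the top k.
k = 4
selected = np.argsort(combined)[::-1][:k]
print(selected.tolist())
```

Under this scheme a feature survives if any single base filter ranks it highly, which is what makes the success probability of the ensemble an empirical question rather than a guarantee.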

Citation (APA)

Noureldien, N. A., & Mohammed, E. A. (2020). Measuring Success of Heterogeneous Ensemble Filter Feature Selection Models. International Journal of Recent Technology and Engineering (IJRTE), 8(6), 1153–1158. https://doi.org/10.35940/ijrte.e4993.038620
