Filter Method Ensemble with Neural Networks

Abstract

The main idea behind designing a multiple classifier system is to combine a number of classifiers such that the resulting system outperforms each individual classifier by pooling their decisions. Combining relatively simple pattern recognition models with limited individual performance is common in the literature. Such an ensemble performs better when each learner is trained well and the learners rely on different working principles, which adds diversity to the ensemble. In this paper, we first select three optimal subsets of features using three different filter methods, namely Mutual Information (MI), Chi-square, and ANOVA F-test. With the selected features we then build three learning models using Multi-Layer Perceptron (MLP) based classifiers. The class membership values provided by these three classifiers for each sample are concatenated and fed to another MLP-based classifier. Experiments performed on five datasets from the UCI Machine Learning Repository, namely Arrhythmia, Ionosphere, Hill-Valley, Waveform, and Horse Colic, show the effectiveness of the proposed ensemble model.
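As a rough illustration of the pipeline described in the abstract, the sketch below selects three feature subsets with the three filter methods (Mutual Information, Chi-square, ANOVA F-test), trains one MLP per subset, and feeds the concatenated class membership values to a second MLP. All parameter values, the subset size k, and the stand-in dataset are assumptions for illustration using scikit-learn, not the authors' implementation; the paper evaluates on the five UCI datasets listed above.

```python
# Illustrative sketch only; names and hyperparameters are assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif, chi2, f_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler

# Stand-in dataset; the paper uses Arrhythmia, Ionosphere, Hill-Valley,
# Waveform and Horse Colic from the UCI repository.
X, y = load_breast_cancer(return_X_y=True)
X = MinMaxScaler().fit_transform(X)          # Chi-square needs non-negative features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

k = 10  # assumed subset size; in practice this would be tuned per dataset
filters = [mutual_info_classif, chi2, f_classif]   # MI, Chi-square, ANOVA F-test

base_probas_tr, base_probas_te = [], []
for score_fn in filters:
    # Filter-based feature selection followed by an MLP base classifier
    selector = SelectKBest(score_fn, k=k).fit(X_tr, y_tr)
    clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0)
    clf.fit(selector.transform(X_tr), y_tr)
    # Class membership (probability) values produced by each base MLP
    base_probas_tr.append(clf.predict_proba(selector.transform(X_tr)))
    base_probas_te.append(clf.predict_proba(selector.transform(X_te)))

# Concatenate the three classifiers' membership values and feed them to a meta MLP
meta_tr = np.hstack(base_probas_tr)
meta_te = np.hstack(base_probas_te)
meta = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
meta.fit(meta_tr, y_tr)
print("ensemble accuracy:", meta.score(meta_te, y_te))
```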

Citation (APA)

Chakraborty, A., De, R., Chatterjee, A., Schwenker, F., & Sarkar, R. (2019). Filter Method Ensemble with Neural Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11728 LNCS, pp. 755–765). Springer Verlag. https://doi.org/10.1007/978-3-030-30484-3_59
