Feature extraction is the first step in building real-life classification engines: it aims to design features that characterize the objects to be labeled by a trained model. This step is time-consuming and requires domain expertise to design effective features. Deep neural networks (DNNs) emerged as a remedy in this context: their shallow layers perform representation learning, the automated discovery of features at various levels of abstraction that robustly represent objects. However, the learned representations remain very difficult to interpret, and DNNs are prone to memorizing small datasets. In this paper, we introduce evolvable deep features (EDFs): a DNN is used to extract features automatically, these features undergo genetic feature selection, and the evolved feature subset is fed into a supervised learner. Experiments on binary- and multi-class sets, backed up with statistical tests, showed that our approach automatically learns object representations, greatly reduces the number of features without deteriorating the performance of the trained models, and can even boost their classification performance.
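The genetic feature-selection stage described above can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the population size, selection scheme, and the toy fitness function below are all illustrative assumptions; in the real pipeline the fitness would score a supervised learner trained on the DNN-extracted features selected by each mask.

```python
import random

random.seed(0)

def evolve_feature_mask(n_features, fitness, pop_size=20,
                        generations=30, mutation_rate=0.05):
    """Evolve a binary feature-selection mask with a simple generational GA."""
    # Random initial population of binary masks (1 = feature kept).
    pop = [[random.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the better half as the elite.
        elite = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n_features)   # one-point crossover
            child = a[:cut] + b[cut:]
            # Bit-flip mutation.
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Toy stand-in fitness: features 0-4 are "informative"; reward selecting
# them and penalize mask size, mimicking the goal of fewer features
# without a drop in accuracy.
INFORMATIVE = set(range(5))

def fitness(mask):
    hits = sum(mask[i] for i in INFORMATIVE)
    return hits - 0.1 * sum(mask)

best = evolve_feature_mask(16, fitness)
```

Elitism makes the best score non-decreasing across generations, so the evolved mask tends to keep the informative features while discarding most of the rest.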
CITATION STYLE
Nalepa, J., Mrukwa, G., & Kawulok, M. (2018). Evolvable Deep Features. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10784 LNCS, pp. 497–505). Springer Verlag. https://doi.org/10.1007/978-3-319-77538-8_34