Neural networks have proven to be a strong alternative in application fields such as healthcare, time-series forecasting, and computer vision, among others, for tasks like regression and classification. Their potential has been particularly remarkable on unstructured data, but recently developed architectures, or their ensembles with classical methods, have also produced competitive results on structured data. Feature selection has several beneficial properties: it improves efficiency, performance, and problem understanding, and reduces data collection time. However, as new data sources become available and new features are generated through feature engineering, feature selection methods require ever more computational resources. On datasets with numerous features, feature selection takes an exorbitant amount of time, making it impractical or yielding suboptimal selections that do not reflect the underlying behavior of the problem. We propose a nonparametric neural network layer that provides all the benefits of feature selection while requiring few changes to the architecture. Our method adds a novel layer at the beginning of the neural network that removes the influence of features during training, adding inherent interpretability to the model without extra parameterization. In contrast to other feature selection methods, ours is efficient and model-aware, selecting features with no need to train the model several times. We compared our method with a variety of popular feature selection strategies across several datasets, showing remarkable results.
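The idea of an input layer that removes the influence of individual features can be sketched as follows. This is an illustrative approximation only, not the paper's exact FADL mechanism: it assumes a per-feature importance score (here set by hand; in practice it would be updated during training) and a threshold below which a feature is zeroed out before reaching the rest of the network.

```python
import numpy as np

class FeatureDropLayer:
    """Hypothetical sketch of a feature-masking input layer.

    Holds one importance score per input feature; features whose score
    falls below a threshold are zeroed, removing their influence on all
    downstream layers. Illustrative only -- the actual FADL update rule
    and scoring are defined in the paper.
    """

    def __init__(self, n_features, threshold=0.5):
        self.scores = np.ones(n_features)  # per-feature importance scores
        self.threshold = threshold

    def mask(self):
        # Binary keep/drop mask derived from the current scores.
        return (self.scores >= self.threshold).astype(float)

    def forward(self, x):
        # Dropped features are zeroed; selected features pass unchanged.
        return x * self.mask()

# Example: scores as they might look partway through training.
layer = FeatureDropLayer(4)
layer.scores = np.array([0.9, 0.2, 0.7, 0.1])
x = np.array([[1.0, 2.0, 3.0, 4.0]])
print(layer.forward(x))  # features 2 and 4 are dropped (zeroed)
```

Because the mask sits at the very start of the network, inspecting it after training directly reveals which features the model retained, which is the source of the inherent interpretability claimed above.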
Jiménez-Navarro, M. J., Martínez-Ballesteros, M., Sousa Brito, I. S., Martínez-Álvarez, F., & Asencio-Cortés, G. (2023). Feature-Aware Drop Layer (FADL): A Nonparametric Neural Network Layer for Feature Selection. In Lecture Notes in Networks and Systems (Vol. 531 LNNS, pp. 557–566). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-18050-7_54