Abstract
Feature selection plays an important role in machine learning by reducing model complexity and extracting more meaningful information. Recent studies indicate that, in an adversarial environment where a classifier may be intentionally misled by an adversary, feature selection should account not only for generalization ability but also for security. However, because it considers only the nearest legitimate sample to a malicious sample, the existing adversarial filter feature selection method is sensitive to outliers and unsuitable for Boolean features. This study proposes a distribution-based adversarial filter feature selection method against evasion attacks. Our method uses distribution-based measurements, namely Symmetric Uncertainty and Earth Mover's Distance, to quantify the generalization ability and security of a feature subset. Experiments suggest that our proposed method outperforms existing methods in robustness and stability on datasets with Boolean and real-valued features.
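As background, the two measurements named in the abstract are standard quantities that can be computed with common libraries. The sketch below is illustrative only (the toy data, function name, and class-conditional EMD formulation are our assumptions, not the paper's implementation): Symmetric Uncertainty scores a feature's relevance to the label, while the Earth Mover's Distance between the feature's class-conditional distributions gives a distribution-level notion of how far an adversary must shift samples.

```python
import numpy as np
from scipy.stats import entropy, wasserstein_distance
from sklearn.metrics import mutual_info_score


def symmetric_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)) for discrete variables.

    Hypothetical helper for illustration; entropies and mutual
    information are in nats, so the normalization cancels the units.
    """
    mi = mutual_info_score(x, y)              # I(X; Y)
    hx = entropy(np.bincount(x) / len(x))     # H(X)
    hy = entropy(np.bincount(y) / len(y))     # H(Y)
    if hx + hy == 0.0:
        return 0.0
    return 2.0 * mi / (hx + hy)


# Toy Boolean feature: agrees with the label 90% of the time.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
x = np.where(rng.random(1000) < 0.9, y, 1 - y)

# SU gauges generalization ability of the feature; EMD between the
# feature's distributions on the two classes gauges its security
# (a larger distance means more adversarial effort to cross classes).
su = symmetric_uncertainty(x, y)
emd = wasserstein_distance(x[y == 0], x[y == 1])
print(f"SU = {su:.3f}, EMD = {emd:.3f}")
```

A filter method in this spirit would rank candidate feature subsets by combining the two scores; how the paper weights and aggregates them is not stated in the abstract.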
Citation
Chan, P. P. K., Liang, Y. C., Zhang, F., & Yeung, D. S. (2021). Distribution-based Adversarial Filter Feature Selection against Evasion Attack. In Proceedings of the International Joint Conference on Neural Networks (Vol. 2021-July). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IJCNN52387.2021.9533763