Active learning plays an important role in many areas because it reduces human labeling effort by selecting only the most informative instances for training. Nevertheless, active learning is vulnerable in adversarial environments, such as intrusion detection and spam filtering. The purpose of this paper is to reveal how active learning can be attacked in such environments. The paper makes three contributions: first, we analyze the sampling vulnerability of active learning; second, we present a game framework for attacks against active learning; third, we propose two sampling attack methods, the adding attack and the deleting attack. Experimental results show that both proposed sampling attacks degrade the sampling efficiency of a naive Bayes active learner. © 2012 Springer-Verlag.
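To make the attacked mechanism concrete, the sketch below illustrates uncertainty sampling, the selection strategy an adding attack of this kind can exploit: a learner queries the unlabeled point whose posterior is closest to 0.5, so an attacker who injects non-informative points near the current decision boundary can divert the learner's query budget. This is a hypothetical, minimal 1-D Gaussian naive Bayes illustration, not the paper's actual experimental setup; all data values and function names are invented for the example.

```python
import math

def gaussian_pdf(x, mean, var):
    # Density of a 1-D Gaussian with the given mean and variance.
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def posterior_pos(x, labeled):
    # Fit per-class mean/variance from the labeled set and return
    # P(y=1 | x) under a Gaussian naive Bayes model with equal priors.
    stats = {}
    for y in (0, 1):
        xs = [xi for xi, yi in labeled if yi == y]
        mean = sum(xs) / len(xs)
        var = sum((xi - mean) ** 2 for xi in xs) / len(xs) or 1e-6
        stats[y] = (mean, var)
    p0 = gaussian_pdf(x, *stats[0])
    p1 = gaussian_pdf(x, *stats[1])
    return p1 / (p0 + p1)

def query(labeled, pool):
    # Uncertainty sampling: pick the pool point with posterior closest to 0.5.
    return min(pool, key=lambda x: abs(posterior_pos(x, labeled) - 0.5))

labeled = [(0.0, 0), (1.0, 0), (4.0, 1), (5.0, 1)]
pool = [0.5, 2.4, 4.6]
print(query(labeled, pool))  # 2.4 — the genuinely uncertain boundary point

# Adding attack (sketch): flood the pool with points at the current boundary,
# so the learner queries the attacker's uninformative points instead.
attacked_pool = pool + [2.5, 2.5, 2.5]
print(query(labeled, attacked_pool))  # 2.5 — an injected point wins the query
```

The design point is that uncertainty sampling trusts the pool: because the query rule depends only on predicted posteriors, an adversary who can add (or delete) pool instances controls which points the learner spends its labeling budget on.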
Zhao, W., Long, J., Yin, J., Cai, Z., & Xia, G. (2012). Sampling attack against active learning in adversarial environment. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7647 LNAI, pp. 222–223). https://doi.org/10.1007/978-3-642-34620-0_21