Sampling attack against active learning in adversarial environment

Abstract

Active learning plays an important role in many areas because it reduces human labeling effort by selecting only the most informative instances for training. Nevertheless, active learning is vulnerable in adversarial environments such as intrusion detection and spam filtering. The purpose of this paper is to reveal how active learning can be attacked in such environments. The paper makes three contributions: first, we analyze the sampling vulnerability of active learning; second, we present a game framework for attacks against active learning; third, we propose two sampling attack methods, the adding attack and the deleting attack. Experimental results show that both proposed sampling attacks degrade the sampling efficiency of a naive Bayes active learner. © 2012 Springer-Verlag.
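The abstract does not spell out how the attacks are constructed, but a minimal sketch can illustrate the idea of an adding attack against a pool-based naive Bayes learner that uses uncertainty sampling. Everything here is an illustrative assumption, not the authors' actual method: the data, the uncertainty criterion, and the near-boundary injection are all hypothetical.

```python
# Hypothetical sketch of an "adding attack" on pool-based uncertainty
# sampling with a naive Bayes learner. The attacker floods the unlabeled
# pool with points near the current decision boundary that carry no new
# information, so the sampler wastes its labeling budget querying them.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Clean unlabeled pool: two well-separated Gaussian blobs (classes 0 and 1).
X0 = rng.normal(loc=-2.0, size=(200, 2))
X1 = rng.normal(loc=+2.0, size=(200, 2))
X_pool = np.vstack([X0, X1])
y_pool = np.array([0] * 200 + [1] * 200)

# Seed labeled set and initial model.
labeled = list(range(0, 400, 40))  # a few labels from each class
clf = GaussianNB().fit(X_pool[labeled], y_pool[labeled])

def most_uncertain(clf, X, k=1):
    """Indices of the k pool points whose posterior is closest to 0.5."""
    p = clf.predict_proba(X)[:, 1]
    return np.argsort(np.abs(p - 0.5))[:k]

# Adding attack (assumed form): inject redundant points near the midpoint
# of the two blobs, where the naive Bayes posterior is close to 0.5, so
# uncertainty sampling repeatedly picks these uninformative duplicates.
X_attack = rng.normal(loc=0.0, scale=0.05, size=(200, 2))
X_poisoned = np.vstack([X_pool, X_attack])

picks = most_uncertain(clf, X_poisoned, k=10)
print("queries landing on attack points:", np.sum(picks >= len(X_pool)))
```

Under these assumptions, nearly every query lands on an injected point, so the labeling budget buys almost no improvement; a deleting attack would presumably work in the opposite direction, removing the genuinely informative instances from the pool before the learner can sample them.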

Citation (APA)

Zhao, W., Long, J., Yin, J., Cai, Z., & Xia, G. (2012). Sampling attack against active learning in adversarial environment. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7647 LNAI, pp. 222–223). https://doi.org/10.1007/978-3-642-34620-0_21
