Data poisoning against differentially-private learners: Attacks and defenses

Abstract

Data poisoning attacks aim to manipulate the model produced by a learning algorithm by adversarially modifying the training set. We consider differential privacy as a defensive measure against this type of attack. We show that private learners are resistant to data poisoning attacks when the adversary is only able to poison a small number of items. However, this protection degrades as the adversary is allowed to poison more data. We empirically evaluate this protection by designing attack algorithms targeting objective perturbation and output perturbation learners, two standard approaches to differentially-private machine learning. Experiments show that our methods are effective when the attacker is allowed to poison sufficiently many training items.
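For readers unfamiliar with the learners mentioned above, the sketch below illustrates output perturbation, one of the two standard approaches the paper targets. It is a minimal, hedged example (not the authors' exact setup), assuming L2-regularized logistic regression with feature vectors of norm at most 1 and a 1-Lipschitz loss, following the style of Chaudhuri et al. (2011); the function name `fit_output_perturbed` and the parameter names are illustrative.

```python
# A minimal sketch of an output-perturbation learner (assumption: L2-regularized
# logistic regression, ||x_i|| <= 1, 1-Lipschitz loss, per Chaudhuri et al. 2011).
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_output_perturbed(X, y, epsilon, lam):
    """Train a regularized logistic regression, then add noise to the learned
    weights so that releasing them is epsilon-differentially private."""
    n, d = X.shape
    # sklearn's C is the inverse of the total regularization strength;
    # C = 1 / (n * lam) matches the objective (1/n) * sum(loss) + (lam/2) * ||w||^2.
    clf = LogisticRegression(C=1.0 / (n * lam), fit_intercept=False)
    clf.fit(X, y)
    w = clf.coef_.ravel()

    # L2 sensitivity of the regularized ERM minimizer: 2 / (n * lam).
    sensitivity = 2.0 / (n * lam)

    # Sample noise with density proportional to exp(-epsilon * ||b|| / sensitivity):
    # uniform direction on the sphere, Gamma-distributed norm.
    direction = np.random.normal(size=d)
    direction /= np.linalg.norm(direction)
    norm = np.random.gamma(shape=d, scale=sensitivity / epsilon)
    return w + norm * direction
```

A data poisoning attacker in this setting can only influence the non-private minimizer `w`; the added noise, calibrated to the sensitivity `2 / (n * lam)`, is what limits how much a few poisoned items can shift the released model.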

Citation (APA)

Ma, Y., Zhu, X., & Hsu, J. (2019). Data poisoning against differentially-private learners: Attacks and defenses. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 4732–4738). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/657
