Using machine teaching to identify optimal training-set attacks on machine learners

246 citations · 195 Mendeley readers
Abstract

We investigate a problem at the intersection of machine learning and security: training-set attacks on machine learners. In such attacks, an attacker contaminates the training data so that a specific learning algorithm produces a model profitable to the attacker. Understanding training-set attacks is important because more intelligent agents (e.g., spam filters and robots) are being equipped with learning capabilities and can potentially be hacked via the data they receive from the environment. This paper identifies the optimal training-set attack on a broad family of machine learners. First, we show that the optimal training-set attack can be formulated as a bilevel optimization problem. Then, we show that for machine learners with certain Karush-Kuhn-Tucker (KKT) conditions, we can solve the bilevel problem efficiently using gradient methods on an implicit function. As examples, we demonstrate optimal training-set attacks on Support Vector Machines, logistic regression, and linear regression with extensive experiments. Finally, we discuss potential defenses against such attacks.
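To make the bilevel idea concrete, here is a minimal NumPy sketch of a label-poisoning attack on ridge (regularized linear) regression, one of the learner families discussed above. Because the learner's KKT conditions are linear, the learned weights are an implicit (here, closed-form) function of the training labels, so the attacker can run plain gradient descent on those labels. The attacker objective, the effort penalty, and all variable names are illustrative assumptions, not the paper's exact formulation.

import numpy as np

# Minimal sketch: a label-poisoning (training-set) attack on ridge regression.
# Ridge regression's KKT conditions are linear, so the learned weights are a
# closed-form implicit function of the training labels; the attacker descends
# the gradient of its objective with respect to those labels.

rng = np.random.default_rng(0)
n, d, lam = 50, 3, 1.0                       # examples, features, ridge strength

X = rng.normal(size=(n, d))
theta_true = np.array([1.0, -2.0, 0.5])
y_clean = X @ theta_true + 0.1 * rng.normal(size=n)

theta_target = np.zeros(d)                   # model the attacker wants the learner to output
effort = 0.01                                # attacker's cost for changing labels
A_inv = np.linalg.inv(X.T @ X + lam * np.eye(d))

def learner(y):
    # Ridge solution of the linear KKT system X'(X theta - y) + lam * theta = 0.
    return A_inv @ X.T @ y

def attack_gradient(y):
    # Gradient of ||theta*(y) - theta_target||^2 + effort * ||y - y_clean||^2
    # w.r.t. y, using the implicit derivative d theta* / d y = A_inv @ X.T.
    theta = learner(y)
    return 2.0 * X @ A_inv @ (theta - theta_target) + 2.0 * effort * (y - y_clean)

y_poison = y_clean.copy()
for _ in range(500):                         # plain gradient descent on the attacker objective
    y_poison -= 0.05 * attack_gradient(y_poison)

print("theta* on clean labels:   ", learner(y_clean))
print("theta* on poisoned labels:", learner(y_poison))

For learners without a closed-form solution (e.g., SVMs or logistic regression), the same gradient is obtained by differentiating through the KKT conditions with the implicit function theorem rather than by an explicit inverse.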

Cite (APA)
Mei, S., & Zhu, X. (2015). Using machine teaching to identify optimal training-set attacks on machine learners. In Proceedings of the National Conference on Artificial Intelligence (Vol. 4, pp. 2871–2877). AI Access Foundation. https://doi.org/10.1609/aaai.v29i1.9569
