Differentially private online active learning with applications to anomaly detection


Abstract

In settings where data instances arrive sequentially or in a streaming fashion, online learning algorithms can train predictors incrementally, for example with stochastic gradient descent. In some security applications, such as training anomaly detectors, the data streams may consist of private information or transactions, and the output of the learning algorithm may reveal information about the training data. Differential privacy is a framework for quantifying the privacy risk in such settings. This paper proposes two differentially private strategies to mitigate privacy risk when training a classifier for anomaly detection in an online setting. The first is a randomized active learning heuristic that screens out uninformative data points in the stream. The second is mini-batching, which improves classifier performance. Experimental results show how these two strategies trade off privacy, label complexity, and generalization performance.
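The two strategies described in the abstract can be illustrated with a short sketch. This is not the paper's exact algorithm; it is a hedged, minimal example assuming a hinge-loss linear classifier, a margin-based randomized query rule, clipped gradients, and Laplace noise added once per mini-batch. All parameter names (`eps`, `clip`, `margin_width`, etc.) are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_online_active_learner(stream, dim, eps=1.0, batch_size=10,
                             lr=0.1, clip=1.0, margin_width=0.5):
    """Illustrative DP online active learning sketch (not the paper's algorithm).

    - Screening: query a label with high probability only when the point
      falls near the current decision boundary (randomized heuristic).
    - Privacy: clip each per-example gradient and add Laplace noise to the
      averaged mini-batch gradient before updating.
    - Mini-batching: accumulate queried points and update once per batch.
    """
    w = np.zeros(dim)
    batch = []
    for x, y in stream:
        margin = abs(w @ x)
        # Randomized screening: always query near the boundary,
        # otherwise query with a small exploration probability.
        p_query = 1.0 if margin < margin_width else 0.05
        if rng.random() > p_query:
            continue
        # Hinge-loss subgradient, clipped to bound sensitivity.
        g = -y * x if y * (w @ x) < 1 else np.zeros(dim)
        norm = np.linalg.norm(g)
        if norm > clip:
            g = g * (clip / norm)
        batch.append(g)
        if len(batch) == batch_size:
            avg_g = np.mean(batch, axis=0)
            # Noise scale chosen so that changing one example in the batch
            # (sensitivity ~ 2*clip/batch_size) is masked at level eps.
            noise = rng.laplace(scale=2 * clip / (batch_size * eps), size=dim)
            w -= lr * (avg_g + noise)
            batch = []
    return w
```

Lowering `margin_width` reduces label complexity (fewer queries) while a smaller `eps` strengthens privacy at the cost of noisier updates, mirroring the trade-offs the paper evaluates experimentally.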

CITATION STYLE

APA

Ghassemi, M., Sarwate, A. D., & Wright, R. N. (2016). Differentially private online active learning with applications to anomaly detection. In AISec 2016 - Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security, co-located with CCS 2016 (pp. 117–128). Association for Computing Machinery, Inc. https://doi.org/10.1145/2996758.2996766
