Local Differential Privacy for Bayesian Optimization


Abstract

Motivated by growing privacy concerns in today's data-intensive online learning systems, we consider black-box optimization in the nonparametric Gaussian process setting with a local differential privacy (LDP) guarantee. Specifically, each user's reward is further corrupted to protect privacy, and the learner must minimize regret with access only to the corrupted rewards. We first derive regret lower bounds that hold for any LDP mechanism and any learning algorithm. We then present three nearly optimal algorithms based on the GP-UCB framework and the Laplace DP mechanism. In the process, we also propose a new Bayesian optimization (BO) method, called MoMA-GP-UCB, based on median-of-means techniques and kernel approximations, which complements previous BO algorithms for heavy-tailed payoffs while reducing their complexity. Finally, empirical comparisons on both synthetic and real-world datasets highlight the superior performance of MoMA-GP-UCB in both private and non-private settings.
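To make the setup concrete, the sketch below illustrates the two ingredients named in the abstract: a Laplace mechanism by which each user privatizes their own reward before releasing it, and a median-of-means estimator of the kind MoMA-GP-UCB builds on to handle the heavy-tailed privatized rewards. This is a minimal illustration, not the authors' implementation; the function names, the privacy budget `epsilon`, and the reward bound `B` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize_reward(reward, epsilon, B=1.0):
    """Laplace LDP mechanism (sketch): the user adds Laplace noise to
    their own reward before sending it to the learner.

    Assumes rewards lie in [-B, B], so the sensitivity is 2B and adding
    Laplace(2B / epsilon) noise yields an epsilon-LDP release.
    """
    noise = rng.laplace(loc=0.0, scale=2.0 * B / epsilon)
    return reward + noise

def median_of_means(samples, num_blocks):
    """Median-of-means estimator (sketch): split the samples into
    blocks, average each block, and return the median of the block
    means. This is robust to the heavy tails that the Laplace noise
    introduces into the observed rewards.
    """
    blocks = np.array_split(np.asarray(samples), num_blocks)
    block_means = [block.mean() for block in blocks]
    return float(np.median(block_means))

# Example: the learner repeatedly queries one point and forms a robust
# estimate of its mean reward from the privatized observations.
true_reward = 0.3
observations = [privatize_reward(true_reward, epsilon=1.0) for _ in range(64)]
estimate = median_of_means(observations, num_blocks=8)
```

The intuition, under these assumptions, is that the Laplace noise makes the observed rewards heavy-tailed relative to the usual sub-Gaussian setting, so a robust mean estimator such as median-of-means is what allows a GP-UCB-style algorithm to retain near-optimal regret on the corrupted observations.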

Citation (APA)
Zhou, X., & Tan, J. (2021). Local Differential Privacy for Bayesian Optimization. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 12B, pp. 11152–11159). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i12.17330
