Limits of computational differential privacy in the client/server setting

Abstract

Differential privacy is a well-established definition guaranteeing that queries to a database do not reveal "too much" information about the specific individuals who have contributed to it. The standard definition of differential privacy is information-theoretic in nature, but it is natural to consider computational relaxations and to explore what can be achieved with respect to such notions. Mironov et al. (Crypto 2009) and McGregor et al. (FOCS 2010) recently introduced and studied several variants of computational differential privacy, showing that in the two-party setting (where the data is split between two parties) these relaxations can offer significant advantages. Left open by prior work was the extent, if any, to which computational differential privacy can help in the usual client/server setting, where the entire database resides at the server and the client poses queries on this data. We show, for queries with output in ℝ^n (for constant n) and with respect to a large class of utilities, that any computationally private mechanism can be converted to a statistically private mechanism that is equally efficient and achieves roughly the same utility.
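
For reference (not part of the original abstract), the two notions being contrasted can be stated as follows, in a standard formulation: M is a mechanism, D and D' range over adjacent databases, S is any set of outputs, κ is a security parameter, and A is any polynomial-time distinguisher. The computational variant shown is the indistinguishability-based notion (IND-CDP) in the style of Mironov et al.

\[ \text{(statistical $\varepsilon$-DP)} \quad \Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S] \]

\[ \text{(computational $\varepsilon$-IND-CDP)} \quad \Pr[A(M_\kappa(D)) = 1] \;\le\; e^{\varepsilon} \cdot \Pr[A(M_\kappa(D')) = 1] + \mathrm{negl}(\kappa) \]

The paper's result says that, for the query class above in the client/server setting, the extra negligible slack in the computational definition buys essentially nothing: the mechanism can be converted to one satisfying the statistical definition with comparable efficiency and utility.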

Cite

APA

Groce, A., Katz, J., & Yerukhimovich, A. (2011). Limits of computational differential privacy in the client/server setting. In Lecture Notes in Computer Science (Vol. 6597, pp. 417–431). Springer. https://doi.org/10.1007/978-3-642-19571-6_25
