Private algorithms for the protected in social network search

22 citations · 59 Mendeley readers

Abstract

Motivated by tensions between data privacy for individual citizens and societal priorities such as counterterrorism and the containment of infectious disease, we introduce a computational model that distinguishes between parties for whom privacy is explicitly protected, and those for whom it is not (the targeted subpopulation). The goal is the development of algorithms that can effectively identify and take action upon members of the targeted subpopulation in a way that minimally compromises the privacy of the protected, while simultaneously limiting the expense of distinguishing members of the two groups via costly mechanisms such as surveillance, background checks, or medical testing. Within this framework, we provide provably privacy-preserving algorithms for targeted search in social networks. These algorithms are natural variants of common graph search methods, and ensure privacy for the protected by the careful injection of noise in the prioritization of potential targets. We validate the utility of our algorithms with extensive computational experiments on two large-scale social network datasets.
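To make the mechanism concrete, below is a minimal illustrative sketch of a prioritized graph search in which each frontier node's score is perturbed with Laplace noise before the next node to examine is chosen. This is only a sketch in the spirit of the abstract's description: the scoring rule (count of already-discovered targeted neighbors), the noise scale, and all function names are assumptions for illustration, not the paper's actual algorithms or privacy analysis.

```python
import math
import random

def laplace_noise(scale, rng):
    # Sample from a Laplace(0, scale) distribution via inverse-CDF.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_targeted_search(graph, seeds, is_target, budget, epsilon, rng=None):
    """Greedy search over an adjacency-dict graph. At each step, examine the
    frontier node whose *noisy* score (number of already-found targets among
    its neighbors, plus Laplace noise) is highest. Returns the set of
    examined nodes. Illustrative sketch only, not the published algorithm."""
    rng = rng or random.Random()
    found_targets = {s for s in seeds if is_target(s)}
    examined = set(seeds)
    frontier = set()
    for s in seeds:
        frontier.update(n for n in graph[s] if n not in examined)
    while frontier and len(examined) < budget:
        # Noisy-max selection: perturb each candidate's score with noise
        # whose scale is controlled by the privacy parameter epsilon.
        def noisy_score(v):
            score = sum(1 for n in graph[v] if n in found_targets)
            return score + laplace_noise(1.0 / epsilon, rng)
        v = max(frontier, key=noisy_score)
        frontier.discard(v)
        examined.add(v)
        if is_target(v):
            found_targets.add(v)
            frontier.update(n for n in graph[v] if n not in examined)
    return examined
```

The budget parameter models the limited number of costly examinations (surveillance, testing) the searcher can afford, and larger noise (smaller epsilon) makes the examination order depend less on any one protected individual's edges.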

Citation (APA)
Kearns, M., Roth, A., Wu, Z. S., & Yaroslavtsev, G. (2016). Private algorithms for the protected in social network search. Proceedings of the National Academy of Sciences of the United States of America, 113(4), 913–918. https://doi.org/10.1073/pnas.1510612113
