Objective: Privacy-preserving data publishing addresses the problem of disclosing sensitive data when mining for useful information. Among existing privacy models, ε-differential privacy provides one of the strongest privacy guarantees and makes no assumptions about an adversary's background knowledge. All existing solutions that ensure ε-differential privacy handle the problems of disclosing relational data and set-valued data in a privacy-preserving manner separately. In this paper, we propose an algorithm that considers both relational and set-valued data in the differentially private disclosure of healthcare data.

Methods: The proposed approach makes a simple yet fundamental switch in differentially private algorithm design: instead of enumerating all possible records (ie, a contingency table) for noise addition, records are generalized before noise addition. The algorithm first generalizes the raw data in a probabilistic way and then adds noise to guarantee ε-differential privacy.

Results: We showed that the disclosed data could be used effectively to build a decision tree induction classifier. Experimental results demonstrated that the proposed algorithm is scalable and performs better than existing solutions for classification analysis.

Limitations: The resulting utility may degrade when the output domain size is very large, making it potentially inappropriate for generating synthetic data for large health databases.

Conclusions: Unlike existing techniques, the proposed algorithm allows the disclosure of health data containing both relational and set-valued data in a differentially private manner, and it can retain essential information for discriminative analysis.
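Illustrative sketch (not the authors' published algorithm): the Python fragment below shows only the core "generalize first, then add noise" idea from the Methods section. The taxonomies, record format, and helper names (AGE_GROUPS, generalize, noisy_release) are assumptions made for illustration; the paper's probabilistic choice of generalizations is replaced here by a fixed taxonomy, and Laplace noise is used as a standard example of a differentially private perturbation. Because noise is added per generalized cell, the domain that receives noise is far smaller than the full contingency table over the raw attribute values.

```python
import numpy as np
from collections import Counter
from itertools import chain, combinations

# Assumed, illustrative taxonomies: a relational attribute (age) and a
# set-valued attribute (diagnosis codes) are coarsened before any noise is added.
AGE_GROUPS = ["[1-60]", "[61-120]"]
CODE_GROUPS = ["respiratory", "metabolic"]
DIAGNOSIS_TAXONOMY = {"flu": "respiratory", "asthma": "respiratory",
                      "diabetes": "metabolic", "obesity": "metabolic"}

def generalize(record):
    """Map a raw (age, set-of-diagnosis-codes) record to a generalized cell."""
    age, codes = record
    age_group = "[1-60]" if age <= 60 else "[61-120]"
    return (age_group, frozenset(DIAGNOSIS_TAXONOMY[c] for c in codes))

def generalized_domain():
    """Enumerate the cells of the generalized domain, which is far smaller
    than the contingency table over all raw attribute values."""
    code_subsets = chain.from_iterable(
        combinations(CODE_GROUPS, k) for k in range(1, len(CODE_GROUPS) + 1))
    return [(age, frozenset(s)) for s in code_subsets for age in AGE_GROUPS]

def noisy_release(records, epsilon):
    """Count records per generalized cell, then perturb each count with
    Laplace(1/epsilon) noise; for a fixed generalization the histogram over
    disjoint cells has sensitivity 1, so the noisy counts are epsilon-DP."""
    counts = Counter(generalize(r) for r in records)
    return {cell: max(0, int(round(counts[cell] + np.random.laplace(0.0, 1.0 / epsilon))))
            for cell in generalized_domain()}

if __name__ == "__main__":
    raw = [(34, {"flu"}), (52, {"asthma", "obesity"}), (70, {"diabetes"})]
    print(noisy_release(raw, epsilon=1.0))
```

The design point this sketch tries to capture is that generalization shrinks the output domain before noise addition, so each released count aggregates many raw records and is less distorted by the added noise than the corresponding raw contingency-table cells would be.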
Mohammed, N., Jiang, X., Chen, R., Fung, B. C. M., & Ohno-Machado, L. (2013). Privacy-preserving heterogeneous health data sharing. Journal of the American Medical Informatics Association, 20(3), 462–469. https://doi.org/10.1136/amiajnl-2012-001027