A Privacy-Protection Model for Patients

14 citations · 52 Mendeley readers

This article is free to access.

Abstract

The collection and analysis of patient cases can help researchers extract case features and pursue the goals of precision medicine, but it may raise privacy issues for patients. Although encryption is an effective way to protect privacy, it hinders the sharing and analysis of medical cases. To address this problem, this paper proposes a federated learning verification model that combines blockchain technology, homomorphic encryption, and federated learning to resolve these privacy issues. Moreover, we present the FL-EM-GMM algorithm (Federated Learning Expectation-Maximization Gaussian Mixture Model), which trains models without exchanging raw data, thereby protecting patients' privacy. Finally, we conducted experiments on a federated task over datasets from two organizations, where the data share the same sample IDs but hold different feature subsets, and the system handles the associated privacy and security issues. The results show that the model trained by our system offers better usability, security, and higher efficiency than models trained by traditional machine-learning methods.
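The core federated-EM idea behind FL-EM-GMM can be illustrated with a minimal sketch: each party runs the E-step on its own data and shares only aggregated sufficient statistics (component counts, weighted sums, and weighted outer products), which a coordinator combines in the M-step, so raw samples never leave their owners. This is an illustrative plain-NumPy sketch of the horizontal (sample-partitioned) case; the paper's actual setting is vertical (shared sample IDs, different feature subsets) and additionally uses homomorphic encryption and blockchain, all of which are omitted here. Function names are hypothetical, not from the paper.

```python
import numpy as np

def local_estep_stats(X, weights, means, covs):
    """E-step on one party's local data: compute responsibilities and
    the sufficient statistics (N_k, sum_x, sum_xx) per GMM component."""
    n, d = X.shape
    K = len(weights)
    resp = np.zeros((n, K))
    for k in range(K):
        diff = X - means[k]
        inv = np.linalg.inv(covs[k])
        norm = 1.0 / np.sqrt(((2 * np.pi) ** d) * np.linalg.det(covs[k]))
        expo = -0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff)
        resp[:, k] = weights[k] * norm * np.exp(expo)
    resp /= resp.sum(axis=1, keepdims=True)
    Nk = resp.sum(axis=0)                             # (K,)
    sum_x = resp.T @ X                                # (K, d)
    sum_xx = np.einsum('ik,ij,il->kjl', resp, X, X)   # (K, d, d)
    return Nk, sum_x, sum_xx

def aggregate_mstep(stats_list):
    """Coordinator M-step: combine only the parties' aggregate statistics;
    no raw patient records are exchanged."""
    Nk = sum(s[0] for s in stats_list)
    sum_x = sum(s[1] for s in stats_list)
    sum_xx = sum(s[2] for s in stats_list)
    weights = Nk / Nk.sum()
    means = sum_x / Nk[:, None]
    covs = sum_xx / Nk[:, None, None] - np.einsum('kj,kl->kjl', means, means)
    covs += 1e-6 * np.eye(means.shape[1])  # small ridge for numerical stability
    return weights, means, covs

# Toy demo: two parties each hold half of a two-cluster dataset.
rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, (200, 2))   # cluster centered at (0, 0)
B = rng.normal(4.0, 1.0, (200, 2))   # cluster centered at (4, 4)
party1 = np.vstack([A[:100], B[:100]])
party2 = np.vstack([A[100:], B[100:]])

weights = np.array([0.5, 0.5])
means = np.array([[0.5, 0.5], [3.0, 3.0]])
covs = np.array([np.eye(2), np.eye(2)])
for _ in range(30):
    stats = [local_estep_stats(P, weights, means, covs) for P in (party1, party2)]
    weights, means, covs = aggregate_mstep(stats)
```

In this sketch the recovered component means approach the true cluster centers, matching what a centralized EM would find, because the global sufficient statistics are exactly the sum of the local ones. In the paper's full system, these exchanged statistics would additionally be protected (e.g., by homomorphic encryption) and the updates logged on a blockchain.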

Citation (APA)

Cheng, W., Ou, W., Yin, X., Yan, W., Liu, D., & Liu, C. (2020). A Privacy-Protection Model for Patients. Security and Communication Networks, 2020. https://doi.org/10.1155/2020/6647562
