Protecting patient privacy by quantifiable control of disclosures in disseminated databases

  • Ohno-Machado L
  • Silveira P
  • Vinterbo S


Abstract

One of the fundamental rights of patients is to have their privacy protected by health care organizations, so that information that can be used to identify a particular individual is not used to reveal sensitive patient data such as diagnoses, reasons for ordering tests, and test results. A common practice is to remove sensitive data from databases that are disseminated to the public, but this can make the disseminated database useless for important public health purposes. If the degree of anonymity of a disseminated data set could be measured, it would be possible to design algorithms that assure that the desired level of confidentiality is achieved. Privacy protection in disseminated databases can be facilitated by the use of special ambiguation algorithms. Most of these algorithms are aimed at making one individual indistinguishable from one or more of his peers. However, even in databases considered "anonymous", it may still be possible to obtain sensitive information about some individuals or groups of individuals with the use of pattern recognition algorithms. In this article, we study the problem of determining the degree of ambiguation in disseminated databases and discuss its implications in the development and testing of "anonymization" algorithms. © 2004 Elsevier Ireland Ltd. All rights reserved.
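The idea of making an individual "indistinguishable from one or more of his peers" can be made quantifiable. As an illustration only (the abstract does not specify the article's actual measure), the sketch below computes the achieved k in the standard k-anonymity sense: the size of the smallest group of records that share the same quasi-identifier values. The table, column names, and function name are hypothetical.

```python
from collections import Counter

def anonymity_level(records, quasi_identifiers):
    """Return the size of the smallest group of records sharing the same
    quasi-identifier values -- the achieved 'k' in the k-anonymity sense.
    A larger k means each individual blends in with more peers."""
    groups = Counter(
        tuple(rec[q] for q in quasi_identifiers) for rec in records
    )
    return min(groups.values())

# Hypothetical disseminated table: generalized ZIP code and age range
# serve as quasi-identifiers; diagnosis is the sensitive attribute.
table = [
    {"zip": "921**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "921**", "age": "30-39", "diagnosis": "asthma"},
    {"zip": "921**", "age": "40-49", "diagnosis": "flu"},
]

print(anonymity_level(table, ["zip", "age"]))  # prints 1
```

Here the third record is uniquely identified by its (zip, age) pair, so the achieved k is 1; an ambiguation algorithm would generalize or suppress values until every such group reaches the desired size. Note, as the abstract points out, that a high k alone does not rule out inference of sensitive attributes by pattern recognition.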

Author-supplied keywords

  • Anonymization
  • Confidentiality
  • Consumer informatics
  • Patient privacy
  • Predictive modeling

