Privacy-preserving sequential pattern release


Abstract

We investigate situations where releasing frequent sequential patterns can compromise individuals' privacy. We propose two concrete objectives for privacy protection: k-anonymity and α-dissociation. The first addresses the problem of inferring patterns with very low support, say, in [1, k); such inferred patterns can become quasi-identifiers in linking attacks. We show that, for all but one definition of support, it is impossible to reliably infer support values for patterns with two or more negative items (items that do not occur in a pattern) solely from frequent sequential patterns. For the remaining definition, we formulate privacy inference channels. α-dissociation handles the problem of high certainty in inferring sensitive attribute values. To remove privacy threats with respect to the two objectives, we show that it suffices to examine pairs of sequential patterns whose lengths differ by 1. We then present a Privacy Inference Channels Sanitisation (PICS) algorithm. As illustrated by experiments, it reduces the privacy disclosure risk carried by frequent sequential patterns with a small computational overhead. © Springer-Verlag Berlin Heidelberg 2007.
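The key structural observation in the abstract is that inference channels arise between pairs of patterns whose lengths differ by exactly 1: from supp(P) and supp(P′), where P′ extends P by one item, a reader can bound the number of individuals matching P but not the extra item. The sketch below illustrates this check under assumed data structures; the function names, the tuple representation of patterns, and the specific support definition are illustrative, not the paper's actual implementation.

```python
# Hypothetical sketch of detecting k-anonymity inference channels among
# released frequent sequential patterns. Assumption: patterns are tuples of
# items and `supports` maps each released pattern to its support count,
# under a support definition where supp(P) - supp(P') counts sequences
# matching P but not the one-item extension P'.

def is_subsequence(p, q):
    """True if sequence p occurs in q with order preserved."""
    it = iter(q)
    return all(item in it for item in p)

def inference_channels(supports, k):
    """Find pairs (P, P') with |P'| = |P| + 1 whose support difference
    lies in [1, k): such a pair lets a reader infer a pattern with very
    low support, which can act as a quasi-identifier in a linking attack."""
    channels = []
    for p, sp in supports.items():
        for q, sq in supports.items():
            if len(q) == len(p) + 1 and is_subsequence(p, q):
                diff = sp - sq
                if 0 < diff < k:
                    channels.append((p, q, diff))
    return channels
```

For example, releasing supp(("a",)) = 10 and supp(("a", "b")) = 8 implicitly reveals that only 2 sequences contain "a" without a later "b"; with k = 5 this pair would be flagged as a channel for a sanitisation step (such as the paper's PICS algorithm) to suppress or adjust.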

Citation (APA)

Jin, H., Chen, J., He, H., & O’Keefe, C. M. (2007). Privacy-preserving sequential pattern release. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4426 LNAI, pp. 547–554). Springer Verlag. https://doi.org/10.1007/978-3-540-71701-0_57
