Website Fingerprinting (WF) attacks raise major concerns about users' privacy. They employ Machine Learning (ML) techniques to allow a local passive adversary to uncover the Web browsing behavior of a user, even if she browses through an encrypted tunnel (e.g., Tor or a VPN). Numerous defenses have been proposed in the past; however, it is typically difficult to obtain formal guarantees on their security, which is most often evaluated empirically against state-of-the-art attacks. In this paper, we present a practical method to derive security bounds for any WF defense, where the bounds depend on a chosen feature set. This result derives from reducing WF attacks to an ML classification task, for which we can determine the smallest achievable error (the Bayes error). This error can be estimated in practice, and it is a lower bound on the error of a WF adversary, whatever classification algorithm he may use. Our work has two main consequences: i) it allows determining the security of WF defenses, in a black-box manner, with respect to the state-of-the-art feature set, and ii) it favors shifting the focus of future WF research to identifying optimal feature sets. The generality of this approach further suggests that the method could be used to define security bounds for other ML-based attacks.
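The core idea, lower-bounding any adversary's error by estimating the Bayes error from traffic features, can be illustrated with a classic nearest-neighbor bound (Cover & Hart, 1967): in the binary case, the asymptotic 1-NN error R_NN satisfies R_NN ≤ 2R*(1 − R*), so R* ≥ (1 − √(1 − 2·R_NN))/2. The sketch below is illustrative only, not the paper's exact estimator; the synthetic two-class "website" features and all function names are hypothetical.

```python
import numpy as np


def nn_loo_error(X, y):
    """Leave-one-out 1-nearest-neighbor error rate on features X with labels y."""
    n = len(X)
    errors = 0
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)  # distances to all points
        d[i] = np.inf                         # exclude the point itself
        if y[np.argmin(d)] != y[i]:
            errors += 1
    return errors / n


def bayes_error_lower_bound(r_nn):
    """Binary Cover-Hart bound: R_NN <= 2 R*(1 - R*)  =>  R* >= (1 - sqrt(1 - 2 R_NN)) / 2."""
    return (1.0 - np.sqrt(max(0.0, 1.0 - 2.0 * r_nn))) / 2.0


# Hypothetical traffic features for two monitored websites (synthetic data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)),
               rng.normal(2.0, 1.0, (100, 5))])
y = np.array([0] * 100 + [1] * 100)

r_nn = nn_loo_error(X, y)
lower = bayes_error_lower_bound(r_nn)
print(f"1-NN LOO error: {r_nn:.3f}, Bayes error lower bound: {lower:.3f}")
```

No classifier, however sophisticated, can achieve an error below the Bayes error on these features, which is what makes such an estimate a security bound for the defense rather than an evaluation against one particular attack.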
Citation:
Cherubin, G. (2017). Bayes, not Naïve: Security Bounds on Website Fingerprinting Defenses. Proceedings on Privacy Enhancing Technologies, 2017(4), 215–231. https://doi.org/10.1515/popets-2017-0046