On the Statistical Detection of Adversarial Instances over Encrypted Data

Abstract

Adversarial instances are malicious inputs designed to fool machine learning models. In particular, motivated and sophisticated attackers intentionally craft adversarial instances to evade classifiers trained to detect security violations, such as malware detectors. While existing approaches provide effective solutions for detecting and defending against adversarial samples, they fail to detect such samples when the data are encrypted. In this study, a novel framework is proposed that employs a statistical test to detect adversarial instances when the data under analysis are encrypted. An experimental evaluation of our approach shows its practical feasibility in terms of computational cost.
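To make the idea of statistical detection concrete, the sketch below applies a two-sample Kolmogorov-Smirnov test to classifier confidence scores and flags a batch whose score distribution deviates significantly from a clean reference. This is only an illustrative plaintext analogue under assumed data and an assumed choice of test; the paper's actual contribution, performing such a test over encrypted data with cryptographic protocols, is not shown here.

```python
import numpy as np
from scipy.stats import ks_2samp

# Illustrative (hypothetical) setup: synthetic classifier confidence scores.
# Adversarial perturbations often shift the score distribution of a batch
# relative to scores observed on known-benign inputs.
rng = np.random.default_rng(0)
clean_scores = rng.beta(a=8, b=2, size=500)    # reference scores from benign inputs
suspect_scores = rng.beta(a=4, b=3, size=100)  # incoming batch to be checked

def looks_adversarial(reference, batch, alpha=0.01):
    """Two-sample Kolmogorov-Smirnov test: reject H0 (same distribution)
    when the p-value falls below the significance level alpha."""
    statistic, p_value = ks_2samp(reference, batch)
    return p_value < alpha, p_value

flagged, p = looks_adversarial(clean_scores, suspect_scores)
print(f"p-value = {p:.4g}; batch flagged as adversarial: {flagged}")
```

In the plaintext setting this is a few lines of standard statistics; the difficulty addressed by the paper is evaluating comparable test statistics when the inputs are encrypted, which the above deliberately leaves out.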

Cite

APA

Sheikhalishahi, M., Nateghizad, M., Martinelli, F., Erkin, Z., & Loog, M. (2019). On the Statistical Detection of Adversarial Instances over Encrypted Data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11738 LNCS, pp. 71–88). Springer. https://doi.org/10.1007/978-3-030-31511-5_5
