Randomizing SVM against adversarial attacks under uncertainty

Abstract

Robust machine learning algorithms have been widely studied in adversarial environments, where an adversary maliciously manipulates data samples to evade security systems. In this paper, we propose randomized SVMs that defend against generalized adversarial attacks under uncertainty by learning a distribution over classifiers rather than the single classifier of traditional robust SVMs. Randomized SVMs resist attacks better while preserving high classification accuracy, especially in non-separable cases. Experimental results demonstrate that the proposed models effectively defend against a variety of attacks, including aggressive attacks under uncertainty.
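The abstract does not give the paper's optimization formulation, but the core idea of predicting with a distribution over classifiers instead of one fixed classifier can be illustrated with a minimal sketch. The approach below approximates a classifier distribution by training a pool of linear SVMs on bootstrap resamples and sampling one uniformly at random per query; the function names and parameters are hypothetical illustrations, not the authors' actual method.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Illustrative sketch only: approximates a "classifier distribution" with
# an ensemble of linear SVMs trained on bootstrap resamples. This is an
# assumption for demonstration, not the formulation from the paper.

rng = np.random.default_rng(0)

def fit_randomized_svms(X, y, n_classifiers=20):
    """Train a pool of linear SVMs, each on a bootstrap resample of (X, y)."""
    pool = []
    for _ in range(n_classifiers):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap sample indices
        pool.append(LinearSVC(C=1.0, max_iter=5000).fit(X[idx], y[idx]))
    return pool

def randomized_predict(pool, x):
    """Sample one classifier per query: an attacker who crafts a
    perturbation against any single fixed decision boundary cannot
    know which boundary will actually score the input."""
    clf = pool[rng.integers(len(pool))]
    return clf.predict(x.reshape(1, -1))[0]

# Toy usage on synthetic two-class data.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)
pool = fit_randomized_svms(X, y)
print(randomized_predict(pool, X[0]))
```

The randomness here is the point of the defense: because the effective decision boundary is drawn anew for each prediction, an evasion sample optimized against one sampled classifier is not guaranteed to evade the next.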

Citation (APA)

Chen, Y., Wang, W., & Zhang, X. (2018). Randomizing SVM against adversarial attacks under uncertainty. In Lecture Notes in Computer Science (Vol. 10939 LNAI, pp. 556–568). Springer. https://doi.org/10.1007/978-3-319-93040-4_44
