A2SC: Adversarial Attacks on Subspace Clustering

Abstract

Many studies demonstrate that supervised learning techniques are vulnerable to adversarial examples. However, adversarial threats in unsupervised learning have not drawn sufficient scholarly attention. In this article, we formally address adversarial attacks in the equally important but largely unexplored field of unsupervised clustering, and propose the concepts of the adversarial set and the adversarial set attack for clustering. To illustrate the basic idea, we design a novel adversarial space-mapping attack algorithm to confuse subspace clustering, one of the mainstream branches of unsupervised clustering. The algorithm maps a sample into a wrong class by moving it toward the closest point on the linear subspace of the target class, that is, along the normal direction at that point. This simple single-step algorithm can craft an adversarial set whose image samples are wrongly clustered, even into targeted labels. Empirical results on different image datasets verify the effectiveness and superiority of our algorithm. We further show that deep supervised learning algorithms (such as VGG and ResNet) are also vulnerable to our crafted adversarial set, which illustrates its good cross-task transferability.
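The single-step attack described above can be sketched as follows. This is a minimal illustration of the idea, not the authors' exact algorithm: given a basis for the target class's linear subspace, the sample is moved toward its orthogonal projection onto that subspace (i.e., along the normal at the closest point). The function name and the step-size parameter `eps` are assumptions introduced for illustration.

```python
import numpy as np

def subspace_attack(x, basis, eps=0.5):
    """Move sample x toward its orthogonal projection onto the linear
    subspace spanned by the columns of `basis` (the target class's
    subspace). Sketch of the single-step attack idea; `eps` is a
    hypothetical step-size parameter."""
    # Closest point on the subspace = orthogonal projection of x onto
    # the column space of `basis`; solved via least squares for stability.
    coeffs, *_ = np.linalg.lstsq(basis, x, rcond=None)
    proj = basis @ coeffs
    # Direction from x to its closest point on the subspace,
    # i.e., along the normal at that point.
    direction = proj - x
    norm = np.linalg.norm(direction)
    if norm == 0:
        return x.copy()  # x already lies on the target subspace
    # Take a single step of length eps toward the subspace.
    return x + eps * direction / norm
```

For example, with the subspace spanned by the first two coordinate axes in R^3 and a sample at (0, 0, 1), the closest point on the subspace is the origin, so a step of `eps=0.5` moves the sample to (0, 0, 0.5).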

Citation (APA)

Xu, Y., Wei, X., Dai, P., & Cao, X. (2023). A2SC: Adversarial Attacks on Subspace Clustering. ACM Transactions on Multimedia Computing, Communications and Applications, 19(6). https://doi.org/10.1145/3587097
