Multi-class triplet loss with Gaussian noise for adversarial robustness

Abstract

The performance of Deep Neural Network (DNN) classifiers degrades under adversarial attacks, in which inputs are perturbed imperceptibly relative to the original data. Providing robustness to adversarial attacks is an important challenge in DNN training and has led to extensive research. In this paper, we harden DNN classifiers against adversarial attacks by regularizing their deep internal representation space with a Multi-class Triplet regularization method. This method enables a DNN classifier to learn a feature representation that detects similarities between adversarial and clean images, pulls similar images close to their original class, and pushes dissimilar images away from their false classes. Training with our Multi-class Triplet regularization method, combined with Gaussian noise injection, proves more robust against adversarial attacks than adversarial training on strong iterative attacks.
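
The abstract describes pulling clean and perturbed images toward their true class while pushing them away from false classes, combined with Gaussian noise injection during training. Below is a minimal sketch of that idea, assuming PyTorch; the names `model.features`, `model.classifier`, the noise level `sigma`, and the weighting `lam` are illustrative assumptions, not the loss actually defined in the paper.

```python
import torch
import torch.nn.functional as F

def triplet_regularizer(embeddings, labels, margin=1.0):
    """Pull same-class embeddings together, push different-class ones apart."""
    dists = torch.cdist(embeddings, embeddings)            # pairwise L2 distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)      # mask of same-class pairs
    idx = torch.arange(len(labels), device=labels.device)
    loss, count = embeddings.new_zeros(()), 0
    for i in range(len(labels)):
        pos = dists[i][same[i] & (idx != i)]               # distances to same-class samples
        neg = dists[i][~same[i]]                           # distances to other-class samples
        if len(pos) and len(neg):
            # hardest positive vs. hardest negative, hinged at the margin
            loss = loss + F.relu(pos.max() - neg.min() + margin)
            count += 1
    return loss / max(count, 1)

def training_step(model, x, y, sigma=0.1, lam=0.5):
    """Cross-entropy on Gaussian-noised inputs plus the triplet-style regularizer."""
    x_noisy = x + sigma * torch.randn_like(x)              # Gaussian noise injection
    feats = model.features(x_noisy)                        # deep internal representation
    logits = model.classifier(feats)
    return F.cross_entropy(logits, y) + lam * triplet_regularizer(feats, y)
```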

Citation (APA)

Appiah, B., Baagyere, E. Y., Owusu-Agyemang, K., Qin, Z., & Abdullah, M. A. (2020). Multi-class triplet loss with Gaussian noise for adversarial robustness. IEEE Access, 8, 171664–171671. https://doi.org/10.1109/ACCESS.2020.3024244
