Dual-filtering (DF) schemes for learning systems to prevent adversarial attacks

3 citations · 14 readers

This article is free to access.

Abstract

Defenses against adversarial attacks are essential to ensure the reliability of machine-learning models as their applications expand across domains. Existing ML defense techniques have several limitations in practical use. We propose a trustworthy framework that employs an adaptive strategy to inspect both inputs and decisions. In particular, data streams are examined by a series of diverse filters before being sent to the learning system, and the system's output is then cross-checked by anomaly (outlier) detectors before the final decision is made. Experimental results on benchmark datasets demonstrate that our dual-filtering strategy can mitigate adaptive or advanced adversarial manipulations across a wide range of ML attacks with high accuracy. Moreover, inspecting the output decision boundary with a classification technique automatically affirms the reliability and increases the trustworthiness of any ML-based decision-support system. Unlike other defense techniques, our dual-filtering strategy requires neither adversarial sample generation nor decision-boundary updates for detection, which makes the defense robust to adaptive attacks.
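The abstract describes a two-stage architecture: input-side filters ahead of the model and an output-side anomaly check behind it. The paper's own filter series is not reproduced here; the sketch below is a minimal illustration of that pipeline under stated assumptions, using scikit-learn's LocalOutlierFactor as the output-side detector and a single median filter as an example input filter. The function names and the rejection convention (-1 for suspect samples) are hypothetical, not the authors' implementation.

```python
# Minimal sketch of a dual-filtering pipeline (illustrative, not the
# authors' implementation): an input-side filter ahead of the model and
# an output-side outlier check behind it.
import numpy as np
from scipy.ndimage import median_filter
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import LocalOutlierFactor

# Benchmark data and the protected classifier.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Output-side detector: LOF in novelty mode, fitted on clean training data.
detector = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(X_train)

def input_filter(batch):
    """One example input filter: median-smooth each 8x8 digit image to
    damp small pixel-level (adversarial) perturbations."""
    imgs = batch.reshape(-1, 8, 8)
    return np.stack([median_filter(img, size=2) for img in imgs]).reshape(-1, 64)

def dual_filter_predict(batch):
    """Filter inputs, classify, then cross-check the filtered samples
    with the outlier detector; suspect samples are rejected (-1)."""
    filtered = input_filter(batch)
    preds = model.predict(filtered)
    flags = detector.predict(filtered)  # +1 = inlier, -1 = outlier
    return np.where(flags == 1, preds, -1)

print(dual_filter_predict(X_test[:5]))  # class labels, or -1 for rejected
```

In a deployed system, a rejected sample (label -1) would be routed to a fallback path or flagged for human review rather than acted on directly.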



Citation (APA)

Dasgupta, D., & Gupta, K. D. (2023). Dual-filtering (DF) schemes for learning systems to prevent adversarial attacks. Complex & Intelligent Systems, 9(4), 3717–3738. https://doi.org/10.1007/s40747-022-00649-1

