An Approach to Generation Triggers for Parrying Backdoor in Neural Networks

Abstract

The lack of transparency in the outputs of artificial neural networks makes them vulnerable to backdoor attacks, which produce unexpected results and degrade their effectiveness. A backdoor can remain hidden indefinitely until it is activated by specially modified input data, and it poses an information security threat to all applications, especially those associated with critical information infrastructure. The article presents an approach to detecting and neutralizing the consequences of backdoor attacks in neural networks, based on identifying the backdoor and its possible triggers. Taking into account the peculiarities of training artificial neural networks, the authors present the results of research aimed at determining 1) the presence of a trigger that causes the neural network to produce incorrect results, 2) the characteristics of that trigger, and 3) actions that neutralize the possibility of trigger activation. The novelty of the results lies in a new approach to detecting backdoors in neural networks based on synthesizing triggers, comprising 1) an algorithm for determining the target class of an attack, 2) a model correction algorithm based on neuron pruning, and 3) a model correction algorithm based on unlearning. The authors also conducted experiments to parry this threat using the developed approach and evaluated the effectiveness of neuron pruning and unlearning.

This work won the nationwide contest for the most innovative projects, Code Artificial Intelligence (214635), and was funded by The Foundation for Assistance to Small Innovative Enterprises (FASIE) (Module for protecting neural networks from computer backdoor attacks (PROTECA), www.proteca.tech).
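The abstract names three algorithmic components but does not give their details. As an illustration only, the following PyTorch sketch shows how trigger synthesis of this kind is commonly realized, in the spirit of Neural Cleanse (Wang et al., 2019), a related published technique: a small mask and pattern are optimized so that stamping them onto clean inputs drives the model to a chosen target class. All names and hyperparameters here (synthesize_trigger, lam, steps) are hypothetical and not taken from the paper.

import torch
import torch.nn.functional as F

def synthesize_trigger(model, data_loader, target_class,
                       steps=500, lr=0.1, lam=0.01):
    """Reverse-engineer a candidate trigger (mask + pattern) that pushes
    clean inputs toward target_class. Hypothetical sketch, not the
    paper's exact algorithm."""
    device = next(model.parameters()).device
    x, _ = next(iter(data_loader))          # one fixed batch of clean inputs
    x = x.to(device)
    _, c, h, w = x.shape
    # Optimize unconstrained tensors; sigmoid/tanh keep mask and pattern in [0, 1].
    mask_raw = torch.zeros(1, 1, h, w, device=device, requires_grad=True)
    pattern_raw = torch.zeros(1, c, h, w, device=device, requires_grad=True)
    opt = torch.optim.Adam([mask_raw, pattern_raw], lr=lr)
    target = torch.full((x.size(0),), target_class, device=device, dtype=torch.long)
    model.eval()
    for _ in range(steps):
        mask = torch.sigmoid(mask_raw)
        pattern = 0.5 * (torch.tanh(pattern_raw) + 1)
        x_adv = (1 - mask) * x + mask * pattern   # stamp the trigger onto clean data
        # Cross-entropy drives every input to the target class; the L1 term
        # keeps the recovered trigger small, as real backdoor patches tend to be.
        loss = F.cross_entropy(model(x_adv), target) + lam * mask.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask_raw).detach(), (0.5 * (torch.tanh(pattern_raw) + 1)).detach()

Running this synthesis once per class and flagging the class whose recovered mask has an anomalously small L1 norm (for example, by a median-absolute-deviation outlier test) is one plausible way to realize the target-class determination the abstract mentions; the outlier step is omitted above. Likewise, correction by neuron reduction can be illustrated by zeroing the convolutional channels whose activations rise most when the synthesized trigger is applied. Again, this is a hypothetical sketch (the hooked layer is assumed to be a torch.nn.Conv2d on the same device as the inputs), not the authors' implementation.

def prune_suspect_channels(model, layer, clean_x, mask, pattern, k=8):
    """Zero the k channels of `layer` most excited by the trigger.
    Hypothetical illustration of model correction by neuron pruning."""
    acts = {}
    hook = layer.register_forward_hook(lambda m, i, o: acts.update(out=o.detach()))
    model.eval()
    with torch.no_grad():
        model(clean_x)
        clean_act = acts["out"].mean(dim=(0, 2, 3))    # per-channel mean, clean data
        model((1 - mask) * clean_x + mask * pattern)
        trig_act = acts["out"].mean(dim=(0, 2, 3))     # per-channel mean, triggered data
        hook.remove()
        # Channels whose activation jumps the most under the trigger are suspects.
        suspects = (trig_act - clean_act).topk(k).indices
        layer.weight[suspects] = 0.0                   # prune by zeroing whole filters
        if layer.bias is not None:
            layer.bias[suspects] = 0.0

The remaining component, unlearning, would then fine-tune the pruned model on clean data (or on trigger-stamped inputs relabeled with their true classes) to cancel the poisoned associations.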

Citation (APA)

Artem, M. (2023). An Approach to Generation Triggers for Parrying Backdoor in Neural Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13539 LNAI, pp. 304–314). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-19907-3_29
