Robustness of Deep Learning Models for Vision Tasks


Abstract

In recent years, artificial intelligence technologies for vision tasks have gradually been deployed in the physical world, where they have proven vulnerable to adversarial attacks. Improving robustness against adversarial attacks has therefore emerged as an urgent issue in vision tasks. This article provides a historical summary of the evolution of adversarial attacks and defense methods for CNN-based models, and also introduces studies on brain-inspired models that mimic the visual cortex, which is resistant to adversarial attacks. Since CNN models originated from the application of the physiological findings about the visual cortex available at the time, new physiological studies of the visual cortex offer an opportunity to build models that are more robust to adversarial attacks. The authors hope this review will promote interest and progress in artificial intelligence security by improving the robustness of deep learning models for vision tasks.
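To make the notion of an adversarial attack concrete, the sketch below illustrates the classic fast gradient sign method (FGSM) on a toy logistic-regression "model". This is a hypothetical, minimal illustration of the general technique the abstract refers to, not a method from the reviewed article: the input is nudged by epsilon in the direction of the sign of the loss gradient, which is enough to flip the model's prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, epsilon):
    """Return the FGSM adversarial example x + epsilon * sign(dL/dx)
    for a logistic-regression model with cross-entropy loss."""
    p = sigmoid(w @ x + b)      # predicted probability of class 1
    grad_x = (p - y) * w        # gradient of the loss w.r.t. the input x
    return x + epsilon * np.sign(grad_x)

# Toy model and input (hypothetical values), classified correctly at first.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, -0.5])       # w @ x + b = 1.5, so the model predicts class 1
y = 1.0                         # true label

x_adv = fgsm_attack(x, y, w, b, epsilon=1.0)
print(sigmoid(w @ x + b) > 0.5)       # True: original input classified as class 1
print(sigmoid(w @ x_adv + b) > 0.5)   # False: small perturbation flips the prediction
```

The same one-step idea carries over to deep CNNs, where the input gradient is obtained by backpropagation; stronger iterative variants repeat this step under a perturbation budget.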

Citation (APA)

Lee, Y., & Kim, J. (2023, April 1). Robustness of Deep Learning Models for Vision Tasks. Applied Sciences (Switzerland). MDPI. https://doi.org/10.3390/app13074422
