Intrusion Attacks on Deep Learning Frameworks Employed in Self-Driving Vehicles

  • Fatima, S. K.
  • Fatima, S. G.

Abstract

As deep convolutional neural network (CNN) technology has advanced, deep networks have proven practical for autonomous vehicle applications, and end-to-end learning methods are increasingly used to automate driving tasks. Preliminary studies, however, have shown that deep learning classifiers are vulnerable to adversarial attacks, while the impact of such attacks on regression problems remains insufficiently understood. In this research we propose two white-box adversarial attacks targeting end-to-end self-driving vehicles. The navigation mechanism uses a prediction model that receives a camera image as input and returns a steering angle; by perturbing the input image, an attacker can influence the behaviour of the automated driving unit. Both attacks can be launched in practice on CPUs, with no need for GPUs. Experiments carried out in Udacity demonstrate the effectiveness of the attacks.
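
To make the attack setting concrete, below is a minimal sketch of a white-box, FGSM-style perturbation against a toy image-to-steering-angle regressor, written in PyTorch. The SteeringRegressor model, the fgsm_steering_attack helper, the epsilon value, and the target angle are all illustrative assumptions; the paper's actual attack formulations and driving model are not reproduced here.

```python
# Hypothetical sketch: FGSM-style white-box perturbation of a camera frame
# fed to an image -> steering-angle regressor. Names and values are
# illustrative, not taken from the paper.
import torch
import torch.nn as nn

class SteeringRegressor(nn.Module):
    """Tiny stand-in for an end-to-end driving CNN (image -> steering angle)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single regression output: steering angle

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def fgsm_steering_attack(model, image, target_angle, epsilon=0.01):
    """Perturb `image` so the predicted steering angle moves toward `target_angle`.

    White-box: uses the model's gradient w.r.t. the input image. A single
    forward/backward pass, so it runs comfortably on a CPU.
    """
    image = image.clone().detach().requires_grad_(True)
    pred = model(image)
    loss = nn.functional.mse_loss(pred, target_angle)
    loss.backward()
    # Step against the loss gradient to pull the prediction toward the target.
    adv = image - epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    model = SteeringRegressor().eval()
    frame = torch.rand(1, 3, 66, 200)    # one camera frame, pixels in [0, 1]
    target = torch.tensor([[0.5]])       # attacker-chosen steering angle
    adv_frame = fgsm_steering_attack(model, frame, target)
    print("clean prediction :", model(frame).item())
    print("attacked prediction:", model(adv_frame).item())
```

In this kind of setup, the small epsilon keeps the perturbation visually inconspicuous while still steering the regression output toward the attacker-chosen angle.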

Citation (APA)

Fatima, S. K., & Fatima, S. G. (2023). Intrusion Attacks on Deep Learning Frameworks Employed in Self-Driving Vehicles. International Journal of Recent Technology and Engineering (IJRTE), 11(6), 84–90. https://doi.org/10.35940/ijrte.f7482.0311623
