Connected and Autonomous Vehicles (CAVs) offer improved efficiency and convenience through innovative embedded devices. However, the development of these technologies has often neglected security, leaving vulnerabilities that hackers can exploit. If a CAV system is compromised, the result can be unsafe driving conditions and a threat to human safety. Prioritizing security alongside functional enhancements in the development of CAVs is therefore essential to ensure their safety and reliability and to build consumer trust in the technology. CAVs rely on artificial intelligence to control their driving behavior, and small perturbations to the model's inputs can significantly influence and potentially mislead the system. To address this issue, this study proposed a defense mechanism that uses an autoencoder with a compressive memory module to store normal image features and prevent unexpected generalization on adversarial inputs. The proposed solution was evaluated against Hijacking, Vanishing, Fabrication, and Mislabeling attacks crafted with FGSM and AdvGAN against the Nvidia Dave-2 driving model, and was found to be effective, with success rates of (Formula presented.) and (Formula presented.) in a Whitebox setup, and (Formula presented.) and (Formula presented.) in a Blackbox setup for FGSM and AdvGAN, respectively. This improves on prior results by (Formula presented.) in the Whitebox setup and (Formula presented.) in the Blackbox setup.
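To make the threat model concrete, the Fast Gradient Sign Method (FGSM) mentioned above perturbs an input in the direction of the sign of the loss gradient. The sketch below is a minimal, hedged illustration on a toy linear "steering" regressor, not the paper's setup: the actual attack targets the Nvidia Dave-2 CNN, and the weights, features, and epsilon here are hypothetical.

```python
import numpy as np

def fgsm_perturb(x, w, y_true, eps):
    """FGSM step: move x along sign of the gradient of a squared-error loss.

    Toy linear model y_pred = w @ x stands in for a driving model
    (illustrative only; Dave-2 is a convolutional network).
    """
    y_pred = w @ x
    # d/dx of 0.5 * (y_pred - y_true)^2 for a linear model is (y_pred - y_true) * w
    grad_x = (y_pred - y_true) * w
    return x + eps * np.sign(grad_x)

# Hypothetical weights, input features, and target steering angle
w = np.array([0.5, -0.25, 1.0])
x = np.array([0.2, 0.4, -0.1])
y = 0.0

x_adv = fgsm_perturb(x, w, y, eps=0.1)
# The adversarial input raises the model's error while staying within
# an L-infinity ball of radius eps around the original input.
```

Even this linear case shows the key property the paper defends against: a small, bounded input change (here at most 0.1 per feature) can substantially increase the model's prediction error.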
CITATION STYLE
Shibly, K. H., Hossain, M. D., Inoue, H., Taenaka, Y., & Kadobayashi, Y. (2023). Towards Autonomous Driving Model Resistant to Adversarial Attack. Applied Artificial Intelligence, 37(1). https://doi.org/10.1080/08839514.2023.2193461