Modern Autonomous Vehicles (AVs) leverage road context information collected through sensors (e.g., LiDAR, radar, and camera) to support the automated driving experience. Once such information is collected, a neural network model predicts the subsequent actions that the AV executes. However, state-of-the-art research has shown that an attacker can compromise the accuracy of the neural network model's predictions. Indeed, mispredicting subsequent actions can have harmful consequences for road users' safety. In this paper, we analyze the disruptive impact of adversarial attacks on the road context-aware Intrusion Detection System (RAIDS) and propose a solution to mitigate such effects. To this end, we implement five state-of-the-art evasion attacks on the vehicle camera images the IDS uses to monitor internal vehicular traffic. Our experimental results show how this type of attack can reduce the attack detection accuracy of such detectors to as low as 2.83%. To combat such adversarial attacks, we investigate different countermeasures and propose PAID, a robust context-aware IDS that leverages feature squeezing and GPS to detect intrusions. We evaluate PAID's capability to identify such attacks, and implementation results confirm that PAID achieves a detection accuracy of up to 93.9%, outperforming RAIDS.
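The abstract does not detail PAID's detection mechanism, but the feature-squeezing idea it builds on (Xu et al.'s technique) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `predict`, the bit depth, and the threshold are hypothetical placeholders. The detector compares the model's output on an image against its output on a bit-depth-reduced copy; a large gap suggests an adversarial perturbation, since such perturbations are typically destroyed by squeezing while benign inputs are not.

```python
import numpy as np

def squeeze_bit_depth(img, bits=4):
    # Reduce color depth from 8 bits per channel to `bits` bits,
    # coarsening the input space that adversarial noise lives in.
    levels = 2 ** bits - 1
    return np.round(img.astype(np.float64) / 255.0 * levels) / levels * 255.0

def is_adversarial(img, predict, threshold=0.5):
    # `predict` is a hypothetical stand-in for the IDS's neural network;
    # it maps an image array to a prediction vector.
    p_orig = predict(img)
    p_squeezed = predict(squeeze_bit_depth(img))
    # Flag the input if squeezing changes the prediction too much (L1 gap).
    return float(np.abs(p_orig - p_squeezed).sum()) > threshold
```

In practice, the threshold would be calibrated on benign validation images, and multiple squeezers (e.g., spatial median filtering alongside bit-depth reduction) are commonly combined.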
Teng, K. Z., Limbasiya, T., Turrin, F., Aung, Y. L., Chattopadhyay, S., Zhou, J., & Conti, M. (2023). PAID: Perturbed Image Attacks Analysis and Intrusion Detection Mechanism for Autonomous Driving Systems. In CPSS 2023 - Proceedings of the 9th ACM ASIA Conference on Cyber-Physical System Security Workshop (pp. 3–13). Association for Computing Machinery, Inc. https://doi.org/10.1145/3592538.3594273