Vulnerability of Clean-Label Poisoning Attack for Object Detection in Maritime Autonomous Surface Ships

Abstract

Artificial intelligence (AI) will play an important role in realizing maritime autonomous surface ships (MASSs). However, as a double-edged sword, this new technology brings forth new threats. The purpose of this study is to raise awareness among stakeholders regarding the potential security threats posed by AI in MASSs. To achieve this, we propose a hypothetical attack scenario in which a clean-label poisoning attack was executed on an object detection model, which resulted in boats being misclassified as ferries, thus preventing the detection of pirates approaching a boat. We used the poison frog algorithm to generate poisoning instances, and trained a YOLOv5 model with both clean and poisoned data. Despite the high accuracy of the model, it misclassified boats as ferries owing to the poisoning of the target instance. Although the experiment was conducted under limited conditions, we confirmed vulnerabilities in the object detection algorithm. This misclassification could lead to inaccurate AI decision making and accidents. The hypothetical scenario proposed in this study emphasizes the vulnerability of object detection models to clean-label poisoning attacks, and the need for mitigation strategies against security threats posed by AI in the maritime industry.
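The attack summarized above follows the feature-collision idea behind the Poison Frogs algorithm: a poison image is optimized so that it stays visually close to a clean base-class image (e.g., a ferry) while its deep features collide with those of the target boat image, so a model retrained on the poisoned data misclassifies the target. The sketch below is a minimal, simplified PyTorch-style illustration of that optimization, not the authors' implementation: feature_net, base_img, target_img, and all hyperparameters are assumed placeholders, and a single combined objective is used instead of the original forward-backward splitting procedure.

import torch

def craft_poison(feature_net, base_img, target_img, beta=0.25, lr=0.01, steps=500):
    """Craft a clean-label poison: collide with the target in feature space
    while staying close to the clean base image in input space (assumed setup)."""
    feature_net.eval()
    with torch.no_grad():
        target_feat = feature_net(target_img)        # fixed target features f(t)
    poison = base_img.clone().requires_grad_(True)   # start from the clean base instance
    opt = torch.optim.Adam([poison], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feat = feature_net(poison)
        # feature-collision term plus a proximity term that keeps the poison
        # visually similar to the base image (so its label still looks "clean")
        loss = torch.norm(feat - target_feat) ** 2 + beta * torch.norm(poison - base_img) ** 2
        loss.backward()
        opt.step()
        with torch.no_grad():
            poison.clamp_(0.0, 1.0)                  # keep pixel values in a valid range
    return poison.detach()

In this sketch, the returned poison would be added to the training set with its original base-class label; retraining the detector on the combined clean and poisoned data is what shifts the decision boundary around the target instance.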

Citation (APA)

Lee, C., & Lee, S. (2023). Vulnerability of Clean-Label Poisoning Attack for Object Detection in Maritime Autonomous Surface Ships. Journal of Marine Science and Engineering, 11(6). https://doi.org/10.3390/jmse11061179
