This paper investigates the ethical implications of using adversarial machine learning for obfuscation. We suggest that adversarial attacks can be justified by privacy considerations, but also that they can cause collateral damage. To clarify the matter, we examine two use cases, facial recognition and medical machine learning, and evaluate the collateral-damage counterargument to privacy-motivated adversarial attacks in each. We conclude that obfuscation by data poisoning can be justified in the facial recognition case but not in the medical one. We motivate this conclusion with psychological arguments about personal change, privacy considerations, and purpose limitations on machine learning applications.
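To make the mechanism concrete for readers unfamiliar with it, the toy sketch below illustrates one way obfuscation by data poisoning can work: a user perturbs ("cloaks") their own data points before they are scraped into a recognizer's training set, so that a model trained on the poisoned set no longer matches the user's clean data. This is a deliberately simplified illustration, not the method analyzed in the paper; the two-dimensional feature space, cluster positions, perturbation size, and logistic-regression recognizer are all assumptions chosen for clarity.

# Toy sketch (assumptions throughout): obfuscation by data poisoning
# against a simple "recognizer". Class 0 is the user, class 1 is
# everyone else, in a made-up 2-D embedding space.
import numpy as np

rng = np.random.default_rng(0)

user_clean = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(50, 2))  # the user's real photos
others = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(50, 2))        # other identities

def train_logreg(X, y, lr=0.1, steps=2000):
    # Plain logistic regression trained by gradient descent; it stands
    # in for a real facial-recognition model purely for illustration.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(class 1)
        grad = p - y
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def recognized(w, b, X):
    # Fraction of the user's clean points the model still labels as the user.
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return np.mean(p < 0.5)

y = np.concatenate([np.zeros(50), np.ones(50)])

# Honest training data: the recognizer identifies the user reliably.
w, b = train_logreg(np.vstack([user_clean, others]), y)
print(f"clean training:    user recognized on {recognized(w, b, user_clean):.0%} of clean photos")

# Poisoned training data: before uploading, the user shifts their points
# past the other cluster (an exaggerated "cloak"), keeping their own
# label, so the model learns the wrong region for the user's identity.
user_poisoned = user_clean + 1.5 * (others.mean(axis=0) - user_clean)
w, b = train_logreg(np.vstack([user_poisoned, others]), y)
print(f"poisoned training: user recognized on {recognized(w, b, user_clean):.0%} of clean photos")

On the poisoned run the recognizer learns the wrong region of the feature space for the user, so the user's unmodified photos are no longer identified. The perturbation here is exaggerated for visibility; real cloaking tools such as Fawkes apply small, nearly imperceptible changes computed in a deep feature space.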
Adomaitis, L., & Oak, R. (2023). Ethics of Adversarial Machine Learning and Data Poisoning. Digital Society, 2(1). https://doi.org/10.1007/s44206-023-00039-1