Ethics of Adversarial Machine Learning and Data Poisoning

  • Adomaitis, L.
  • Oak, R.
Abstract

This paper investigates the ethical implications of using adversarial machine learning for the purpose of obfuscation. We suggest that adversarial attacks can be justified by privacy considerations but that they can also cause collateral damage. To clarify the matter, we employ two use cases—facial recognition and medical machine learning—to evaluate the collateral damage counterarguments to privacy-induced adversarial attacks. We conclude that obfuscation by data poisoning can be justified in facial recognition but not in the medical case. We motivate our conclusion by employing psychological arguments about change, privacy considerations, and purpose limitations on machine learning applications.

Citation (APA)

Adomaitis, L., & Oak, R. (2023). Ethics of Adversarial Machine Learning and Data Poisoning. Digital Society, 2(1). https://doi.org/10.1007/s44206-023-00039-1
