Adversarial Machine Learning: Bayesian Perspectives

Abstract

Adversarial Machine Learning (AML) is emerging as a major field aimed at protecting Machine Learning (ML) systems against security threats: in certain scenarios, adversaries may actively manipulate input data to fool learning systems. This creates a new class of security vulnerabilities that ML systems may face, and a new desirable property, adversarial robustness, that is essential for trusting operations based on ML outputs. Most work in AML is built upon a game-theoretic modeling of the conflict between a learning system and an adversary ready to manipulate input data. This assumes that each agent knows their opponent’s interests and uncertainty judgments, facilitating inferences based on Nash equilibria. However, such a common knowledge assumption is not realistic in the security scenarios typical of AML. After reviewing these game-theoretic approaches, we discuss the benefits that Bayesian perspectives provide when defending ML-based systems. We demonstrate how the Bayesian approach allows us to explicitly model our uncertainty about the opponent’s beliefs and interests, relaxing unrealistic common knowledge assumptions and providing more robust inferences. We illustrate this approach in supervised learning settings and identify relevant future research problems. Supplementary materials for this article are available online.
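To fix ideas, the following is a minimal sketch of the kind of adversary-aware prediction rule such a Bayesian treatment leads to in supervised learning; the notation (a possibly manipulated instance x', an attack model p(x' | x, y)) is illustrative and not necessarily the article's own formulation. Rather than computing a Nash equilibrium under common knowledge, the defender places a subjective distribution over how an attacker might have transformed a clean instance x with label y into the observed x', and averages over that uncertainty when predicting:

\[
  p(y \mid x') \;\propto\; p(y) \int p(x' \mid x, y)\, p(x \mid y)\, \mathrm{d}x,
\]

where p(x | y) and p(y) are estimated from clean training data and p(x' | x, y) encodes the defender's beliefs about the attacker's capabilities and interests. Classification then proceeds by maximizing expected utility under p(y | x').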

Citation (APA)

Rios Insua, D., Naveiro, R., Gallego, V., & Poulos, J. (2023). Adversarial Machine Learning: Bayesian Perspectives. Journal of the American Statistical Association. Taylor and Francis Ltd. https://doi.org/10.1080/01621459.2023.2183129