On managing vulnerabilities in AI/ML systems


Abstract

This paper explores how the current paradigm of vulnerability management might adapt to include machine learning systems through a thought experiment: what if flaws in machine learning (ML) were assigned Common Vulnerabilities and Exposures (CVE) identifiers (CVE-IDs)? We consider both ML algorithms and model objects. The hypothetical scenario is structured around exploring the changes to the six areas of vulnerability management: discovery, report intake, analysis, coordination, disclosure, and response. While algorithm flaws are well known in the academic research community, there is no clear line of communication between this research community and the operational communities that deploy and manage systems that use ML. The thought experiment identifies ways in which CVE-IDs could establish useful lines of communication between these two communities. In particular, assigning CVE-IDs would begin to introduce the research community to operational security concepts, which appears to be a gap left by existing efforts.

Citation (APA):
Spring, J. M., Galyardt, A., Householder, A. D., & Vanhoudnos, N. (2021). On managing vulnerabilities in AI/ML systems. In ACM International Conference Proceeding Series (pp. 111–126). Association for Computing Machinery. https://doi.org/10.1145/3442167.3442177
