Confidential Machine Learning Computation in Untrusted Environments: A Systems Security Perspective

Abstract

As machine learning (ML) technologies and applications rapidly transform many computing domains, security issues associated with ML are also emerging. In the domain of systems security, considerable effort has been devoted to ensuring the confidentiality of ML models and data. ML computations are often performed in untrusted environments and entail complex multi-party security requirements. Hence, researchers have leveraged Trusted Execution Environments (TEEs) to build confidential ML computation systems. We conduct a systematic and comprehensive survey by classifying attack vectors and mitigations for confidential ML computation in untrusted environments, analyzing the complex security requirements of multi-party scenarios, and summarizing the engineering challenges of confidential ML implementation. Lastly, we suggest future research directions based on our study.

Citation (APA)

Duy, K. D., Noh, T., Huh, S., & Lee, H. (2021). Confidential Machine Learning Computation in Untrusted Environments: A Systems Security Perspective. IEEE Access, 9, 168656–168677. https://doi.org/10.1109/ACCESS.2021.3136889
