Lasagna: Accelerating secure deep learning inference in SGX-enabled edge cloud


Abstract

Edge intelligence is widely regarded as a key enabling technology across a variety of domains. Alongside this growth, concern is mounting over the security and privacy of intelligent applications. Because these applications are usually deployed on shared, untrusted edge servers, malicious co-located attackers, or even untrustworthy infrastructure providers, may acquire highly security-sensitive data and code (i.e., the pre-trained model). Intel Software Guard Extensions (SGX) provides an isolated Trusted Execution Environment (TEE) that guarantees task security. However, DNN inference performance in SGX suffers severely from the limited enclave memory, which induces frequent page swapping, and from the high overhead of enclave calls. To tackle this problem, we propose Lasagna, an SGX-oriented framework that accelerates DNN inference without compromising task security. Lasagna consists of a local task scheduler and a global task balancer that optimize system performance by exploiting the layered structure of DNN models. Our experiment results show that layer-aware Lasagna speeds up inference of well-known DNNs in SGX by 1.31x-1.97x.
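To make the layer-aware idea concrete, the following is a minimal illustrative sketch (not the paper's implementation): consecutive DNN layers are greedily packed into execution blocks whose estimated working set fits within the usable SGX Enclave Page Cache (EPC), so that each block can run inside the enclave without triggering page swaps. The per-layer memory figures and the EPC budget here are assumptions for illustration only.

```python
# Hypothetical sketch of layer-aware scheduling under an enclave memory
# budget. Usable EPC on SGX1 hardware is roughly 93 MB; we assume a
# slightly smaller budget to leave headroom (assumption, not from paper).
EPC_BUDGET_MB = 90

def plan_blocks(layer_mem_mb, budget_mb=EPC_BUDGET_MB):
    """Greedily pack consecutive layers into blocks under the memory budget.

    layer_mem_mb: estimated working-set size (MB) of each DNN layer, in order.
    Returns a list of blocks, each a list of layer indices that are executed
    together inside the enclave before control returns.
    """
    blocks, current, used = [], [], 0.0
    for i, mem in enumerate(layer_mem_mb):
        # Start a new block when adding this layer would exceed the budget.
        if current and used + mem > budget_mb:
            blocks.append(current)
            current, used = [], 0.0
        current.append(i)
        used += mem
    if current:
        blocks.append(current)
    return blocks

# Example: a model whose per-layer working sets (MB) are assumed below.
print(plan_blocks([40, 35, 30, 20, 60, 25]))  # → [[0, 1], [2, 3], [4, 5]]
```

Grouping layers this way trades a few extra enclave crossings for the elimination of mid-layer page swapping, which the abstract identifies as the dominant cost.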

Citation (APA)

Li, Y., Zeng, D., Gu, L., Chen, Q., Guo, S., Zomaya, A., & Guo, M. (2021). Lasagna: Accelerating secure deep learning inference in SGX-enabled edge cloud. In SoCC 2021 - Proceedings of the 2021 ACM Symposium on Cloud Computing (pp. 533–545). Association for Computing Machinery, Inc. https://doi.org/10.1145/3472883.3486988
