Abstract
Pre-trained Deep Neural Network (DNN) models are increasingly used in smartphones and other user devices to enable prediction services, leading to potential disclosures of (sensitive) information from training data captured inside these models. Based on the concept of generalization error, we propose a framework to measure the amount of sensitive information memorized in each layer of a DNN. Our results show that, when considered individually, the last layers encode a larger amount of information from the training data compared to the first layers. We find that the same DNN architecture trained with different datasets has similar exposure per layer. We evaluate an architecture to protect the most sensitive layers within an on-device Trusted Execution Environment (TEE) against potential white-box membership inference attacks without significant computational overhead.
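The per-layer measurement described above could be sketched as follows. This is a minimal illustration, not the paper's exact framework: the `layer_exposure` function and the synthetic activation statistics are hypothetical, using the gap between a layer's activation distribution on training versus held-out data as a stand-in for a generalization-error-based exposure measure.

```python
import numpy as np

def layer_exposure(train_acts, test_acts):
    """Hypothetical proxy for a layer's information exposure: the gap
    between its mean activations on training vs. held-out data.
    A larger gap suggests more memorization of the training set."""
    mu_gap = np.abs(train_acts.mean(axis=0) - test_acts.mean(axis=0))
    return float(mu_gap.mean())

rng = np.random.default_rng(0)

# Toy activations (256 samples, 64 units per layer). The "first layer"
# generalizes, so train/test activations come from the same distribution;
# the "last layer" memorizes, so its train distribution is shifted.
first_train = rng.normal(0.0, 1.0, (256, 64))
first_test  = rng.normal(0.0, 1.0, (256, 64))
last_train  = rng.normal(0.5, 1.0, (256, 64))
last_test   = rng.normal(0.0, 1.0, (256, 64))

exposures = {
    "first_layer": layer_exposure(first_train, first_test),
    "last_layer":  layer_exposure(last_train, last_test),
}
```

Under this toy metric the last layer shows markedly higher exposure than the first, mirroring the paper's finding that later layers encode more training-data information; such a per-layer ranking is what would guide which layers to place inside the TEE.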
Citation
Mo, F., Shamsabadi, A. S., Katevas, K., Cavallaro, A., & Haddadi, H. (2019). Poster: Towards characterizing and limiting information exposure in DNN layers. In Proceedings of the ACM Conference on Computer and Communications Security (pp. 2653–2655). Association for Computing Machinery. https://doi.org/10.1145/3319535.3363279