Poster: Towards characterizing and limiting information exposure in DNN layers


Abstract

Pre-trained Deep Neural Network (DNN) models are increasingly used in smartphones and other user devices to enable prediction services, leading to potential disclosures of (sensitive) information from training data captured inside these models. Based on the concept of generalization error, we propose a framework to measure the amount of sensitive information memorized in each layer of a DNN. Our results show that, when considered individually, the last layers encode a larger amount of information from the training data compared to the first layers. We find that the same DNN architecture trained with different datasets has similar exposure per layer. We evaluate an architecture to protect the most sensitive layers within an on-device Trusted Execution Environment (TEE) against potential white-box membership inference attacks without significant computational overhead.

Citation (APA)

Mo, F., Shamsabadi, A. S., Katevas, K., Cavallaro, A., & Haddadi, H. (2019). Poster: Towards characterizing and limiting information exposure in DNN layers. In Proceedings of the ACM Conference on Computer and Communications Security (pp. 2653–2655). Association for Computing Machinery. https://doi.org/10.1145/3319535.3363279
