Privacy sensitive large-margin model for face de-identification

Abstract

Concern over face privacy protection is growing with the wide application of big media data and social networks, where personal images are freely released online. Although pioneering works have made some progress, they do not sufficiently sanitize sensitive identity information. In this paper, we propose a generative approach that de-identifies face images while preserving non-sensitive information for data reusability. To ensure a high privacy level, we introduce a large-margin model that keeps each synthesized new identity at a safe distance from both the input identity and all existing identities. We further show that our face de-identification operation satisfies ε-differential privacy, which provides a rigorous privacy notion in theory. We evaluate the proposed approach on the VGGFace dataset and compare it with several state-of-the-art methods. The results show that our approach outperforms previous solutions in effective face privacy protection while preserving the major utilities.
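The large-margin idea described above can be sketched as a hinge-style penalty on identity embeddings: the penalty is zero only when the synthesized identity stays at least a margin away from the input identity and from every existing identity. This is a minimal illustrative sketch, assuming Euclidean distances between embedding vectors; the function name, margin value, and formulation are assumptions for exposition, not the paper's actual implementation.

```python
# Hypothetical sketch of the abstract's large-margin constraint: penalize a
# synthesized identity embedding that falls within a safety margin of either
# the input identity or any existing (gallery) identity. All names and the
# margin value are illustrative assumptions, not the paper's implementation.
from math import dist  # Euclidean distance between two points (Python 3.8+)

def large_margin_penalty(new_emb, input_emb, gallery_embs, margin=1.0):
    """Hinge-style penalty: zero only when `new_emb` keeps at least
    `margin` distance from the input identity and every gallery identity."""
    # Distance to the input identity must exceed the margin.
    penalty = max(0.0, margin - dist(new_emb, input_emb))
    # The same safe-distance requirement applies to each existing identity.
    for g in gallery_embs:
        penalty += max(0.0, margin - dist(new_emb, g))
    return penalty

# A de-identified embedding far from everyone incurs no penalty:
print(large_margin_penalty([3.0, 3.0], [0.0, 0.0], [[5.0, 5.0]], margin=1.0))
# An embedding still close to the input identity is penalized:
print(large_margin_penalty([0.1, 0.0], [0.0, 0.0], [], margin=1.0))
```

Minimizing such a penalty during synthesis would push the new identity outside the margin around both the source and known identities, which is the intuition behind the "safe distance" requirement in the abstract.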

Citation (APA)

Guo, Z., Liu, H., Kuang, Z., Nakashima, Y., & Babaguchi, N. (2020). Privacy sensitive large-margin model for face de-identification. In Communications in Computer and Information Science (Vol. 1265 CCIS, pp. 488–501). Springer. https://doi.org/10.1007/978-981-15-7670-6_40
