A Survey on Gradient Inversion: Attacks, Defenses and Future Directions

Abstract

Recent studies have shown that training samples can be recovered from shared gradients; such attacks are known as Gradient Inversion (GradInv) attacks. However, extensive surveys covering recent advances and providing a thorough analysis of this issue are still lacking. In this paper, we present a comprehensive survey on GradInv, aiming to summarize cutting-edge research and broaden horizons across different domains. First, we propose a taxonomy of GradInv attacks that characterizes existing attacks into two paradigms: iteration-based and recursion-based attacks. In particular, we identify several critical ingredients of iteration-based attacks: data initialization, model training and gradient matching. Second, we summarize emerging defense strategies against GradInv attacks, which focus on three perspectives: data obscuration, model improvement and gradient protection. Finally, we discuss promising directions and open problems for further research.
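To make the iteration-based paradigm concrete, the following is a minimal, illustrative sketch of gradient matching on a toy model (in the spirit of attacks such as Deep Leakage from Gradients), not code from the surveyed works. The model, data and hyperparameters are all hypothetical: a scalar linear regression f(x) = w·x with squared-error loss, whose gradient with respect to w for one sample (x, y) is (w·x − y)·x. The attacker knows w and the shared gradient g, initializes a dummy sample, and iteratively updates it so that its gradient matches g.

```python
import numpy as np

# Hypothetical toy setting: linear model f(x) = w . x, squared-error loss,
# so the gradient w.r.t. w for one sample (x, y) is (w . x - y) * x.
rng = np.random.default_rng(0)
w = np.array([0.5, -0.3, 0.8])          # model weights known to the attacker
x_true = np.array([1.0, 2.0, -1.0])     # secret training sample
y_true = 0.7
g = (w @ x_true - y_true) * x_true      # gradient shared, e.g. in federated learning

# Attacker: initialize a dummy sample (data initialization), then run
# gradient descent on the gradient-matching loss ||grad(dummy) - g||^2.
x_d = rng.normal(0.0, 0.1, size=3)      # dummy input
y_d = 0.0                               # dummy label
lr = 0.01
for _ in range(50_000):
    r = w @ x_d - y_d                   # residual of the dummy sample
    e = r * x_d - g                     # mismatch between dummy gradient and g
    # Analytic gradients of ||r * x_d - g||^2 w.r.t. x_d and y_d
    grad_x = 2.0 * (r * e + (e @ x_d) * w)
    grad_y = -2.0 * (e @ x_d)
    x_d -= lr * grad_x
    y_d -= lr * grad_y

match_loss = float(np.sum(((w @ x_d - y_d) * x_d - g) ** 2))
print(f"final gradient-matching loss: {match_loss:.2e}")
```

Note that for this toy model many dummy pairs produce the same gradient, so exact recovery of the secret sample is not guaranteed; only the matching loss is driven toward zero. Real GradInv attacks apply the same idea to deep networks, where richer gradients make the recovered input far closer to the original.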

Citation (APA)

Zhang, R., Guo, S., Wang, J., Xie, X., & Tao, D. (2022). A Survey on Gradient Inversion: Attacks, Defenses and Future Directions. In IJCAI International Joint Conference on Artificial Intelligence (pp. 5678–5685). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/791
