Adversarial Robustness of Neural Networks From the Perspective of Lipschitz Calculus: A Survey

  • Zühlke, M.-M.
  • Kudenko, D.

Abstract

We survey the adversarial robustness of neural networks from the perspective of Lipschitz calculus in a unifying fashion, expressing models, attacks, and safety guarantees (that is, a notion of measurable trustworthiness) in a common mathematical language. After an intuitive motivation, we discuss algorithms for estimating a network's Lipschitz constant, Lipschitz regularisation techniques, robustness guarantees, and the connection between a model's Lipschitz constant and its generalisation capabilities. We then present a new vantage point based on minimal Lipschitz extensions, corroborate its value empirically, and discuss possible research directions. Finally, we provide a toolbox of mathematical prerequisites for navigating the field (Appendix).
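As background for the estimation topic mentioned above (this sketch is illustrative and not taken from the survey itself): the most common coarse estimate of a feedforward network's Lipschitz constant is the product of the layers' spectral norms, which is a valid upper bound whenever the activations are 1-Lipschitz (e.g. ReLU). A minimal sketch with random stand-in weight matrices:

```python
import numpy as np

# Illustrative sketch (not code from the paper): for f(x) = W2 s(W1 x) with a
# 1-Lipschitz activation s (e.g. ReLU), the Lipschitz constant satisfies
# Lip(f) <= ||W1||_2 * ||W2||_2, where ||.||_2 is the spectral norm.
rng = np.random.default_rng(0)
weights = [
    rng.standard_normal((32, 64)),  # stand-in layer: R^64 -> R^32
    rng.standard_normal((16, 32)),  # stand-in layer: R^32 -> R^16
]

bound = 1.0
for W in weights:
    bound *= np.linalg.norm(W, 2)  # spectral norm = largest singular value of W
print(f"Naive Lipschitz upper bound: {bound:.2f}")
```

This layerwise bound is cheap to compute but typically loose; tighter estimators trade computational cost for accuracy.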

Cite

APA

Zühlke, M.-M., & Kudenko, D. (2024). Adversarial Robustness of Neural Networks From the Perspective of Lipschitz Calculus: A Survey. ACM Computing Surveys. https://doi.org/10.1145/3648351
