We survey the adversarial robustness of neural networks from the perspective of Lipschitz calculus, unifying the field by expressing models, attacks, and safety guarantees (that is, a notion of measurable trustworthiness) in a common mathematical language. After an intuitive motivation, we discuss algorithms for estimating a network's Lipschitz constant, Lipschitz regularisation techniques, robustness guarantees, and the connection between a model's Lipschitz constant and its generalisation capabilities. We then present a new vantage point based on minimal Lipschitz extensions, corroborate its value empirically, and discuss possible research directions. Finally, we provide a toolbox of mathematical prerequisites for navigating the field (Appendix).
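As a rough orientation for the estimation algorithms the abstract announces, the sketch below computes the classical upper bound on the l2-Lipschitz constant of a feed-forward network with 1-Lipschitz activations (such as ReLU): the product of the layers' spectral norms. This is a well-known bound, not the paper's specific method; the helper name lipschitz_upper_bound and the toy weights are illustrative assumptions.

import numpy as np

def lipschitz_upper_bound(weights):
    """Product of the layers' spectral norms: a classical upper bound
    on the l2-Lipschitz constant of a feed-forward network with
    1-Lipschitz activations (e.g. ReLU). Cheap to compute, often loose."""
    # np.linalg.norm(W, ord=2) returns the largest singular value of W.
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

# Toy 3-layer network: input dim 32, hidden dims 64 and 32, output dim 10.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((64, 32)),
           rng.standard_normal((32, 64)),
           rng.standard_normal((10, 32))]
print(f"Upper bound on the Lipschitz constant: {lipschitz_upper_bound(weights):.2f}")

Tighter but costlier estimators exist; comparing them is exactly the kind of material a survey of this scope covers.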
Zühlke, M.-M., & Kudenko, D. (2024). Adversarial Robustness of Neural Networks From the Perspective of Lipschitz Calculus: A Survey. ACM Computing Surveys. https://doi.org/10.1145/3648351