Neuro-Symbolic Verification of Deep Neural Networks

Citations: 7 · Mendeley readers: 25

Abstract

Formal verification has emerged as a powerful approach to ensure the safety and reliability of deep neural networks. However, current verification tools are limited to only a handful of properties that can be expressed as first-order constraints over the inputs and output of a network. While adversarial robustness and fairness fall under this category, many real-world properties (e.g., “an autonomous vehicle has to stop in front of a stop sign”) remain outside the scope of existing verification technology. To mitigate this severe practical restriction, we introduce a novel framework for verifying neural networks, named neuro-symbolic verification. The key idea is to use neural networks as part of the otherwise logical specification, enabling the verification of a wide variety of complex, real-world properties, including the one above. A defining feature of our framework is that it can be implemented on top of existing verification infrastructure for neural networks, making it easily accessible to researchers and practitioners.
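To make the key idea concrete, below is a minimal, purely illustrative sketch of a neuro-symbolic property of the form "if a perception network detects a stop sign, the controller must output stop". Note that the paper verifies such properties *formally* on top of existing neural network verifiers; this sketch only *tests* the property on random inputs to show its logical shape. All names here (`tiny_sign_detector`, `tiny_controller`, `falsify`) are hypothetical stand-ins, not APIs from the paper.

```python
# Illustrative sketch only: the framework in the paper checks such properties
# with formal verification tools; here we merely sample inputs and test.
import random

def tiny_sign_detector(image):
    # Hypothetical stand-in for a perception network ("stop sign present?"):
    # flags images whose mean pixel intensity exceeds 0.5.
    return sum(image) / len(image) > 0.5

def tiny_controller(image):
    # Hypothetical stand-in for a driving controller:
    # outputs "stop" for images with mean intensity above 0.4, else "go".
    return "stop" if sum(image) / len(image) > 0.4 else "go"

def falsify(property_holds, sample_input, trials=1000, seed=0):
    """Search for a counterexample to a neuro-symbolic property.

    Returns a violating input, or None if none was found in `trials` samples.
    (A real verifier would search the input space exhaustively/symbolically.)
    """
    rng = random.Random(seed)
    for _ in range(trials):
        x = sample_input(rng)
        if not property_holds(x):
            return x
    return None

# Neuro-symbolic property: detector(x) implies controller(x) == "stop".
# The detector network itself appears inside the specification.
prop = lambda x: (not tiny_sign_detector(x)) or tiny_controller(x) == "stop"

cex = falsify(prop, lambda rng: [rng.random() for _ in range(16)])
print("counterexample:", cex)  # prints "counterexample: None" for these toy models
```

Here the property holds by construction (any image the detector flags has mean intensity above 0.5, hence above the controller's 0.4 stop threshold), so no counterexample is found; the point is only that a network can serve as an atom inside an otherwise logical specification.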

Citation (APA)

Xie, X., Kersting, K., & Neider, D. (2022). Neuro-Symbolic Verification of Deep Neural Networks. In IJCAI International Joint Conference on Artificial Intelligence (pp. 3622–3628). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/503
