Abstract
As Artificial Intelligence (AI) systems are increasingly deployed in high-stakes domains such as healthcare, autonomous systems, finance, and critical infrastructure, ensuring their trustworthiness has become imperative. This paper presents a comprehensive survey of neuro-symbolic AI, a hybrid paradigm that combines the learning capabilities of neural networks with the reasoning strengths of symbolic AI, through the lens of three foundational dimensions: robustness, uncertainty quantification (UQ), and intervenability. We first establish the limitations of purely data-driven “black-box” models in handling distribution shifts, ambiguous inputs, and human oversight. In contrast, neuro-symbolic systems offer enhanced interpretability, verifiability, and control, making them promising candidates for real-world deployment. We systematically review state-of-the-art techniques for modeling robustness, quantifying uncertainty, and enabling intervenability. We further examine how logic, probability, and learning can be integrated into unified or modular architectures to support transparent, adaptive reasoning. Finally, we outline current challenges and identify key research opportunities for advancing neuro-symbolic AI as a trustworthy paradigm. This survey aims to equip researchers and practitioners with a structured understanding of how to build reliable, interpretable, and interactive AI systems by bridging statistical learning and symbolic reasoning.
Citation
Acharya, K., & Song, H. (2025, January 1). A Comprehensive Review of Neuro-symbolic AI for Robustness, Uncertainty Quantification, and Intervenability. Arabian Journal for Science and Engineering. Springer Nature. https://doi.org/10.1007/s13369-025-10887-3