Balancing XAI with Privacy and Security Considerations


Abstract

The acceptability of AI decisions and the efficiency of AI-human interaction become particularly significant when AI is incorporated into Critical Infrastructures (CI). To achieve this, eXplainable AI (XAI) modules must be integrated into the AI workflow. However, by design, XAI reveals the inner workings of AI systems, posing potential risks for privacy leaks and enhanced adversarial attacks. In this literature review, we explore the complex interplay of explainability, privacy, and security within trustworthy AI, highlighting inherent trade-offs and challenges. Our research reveals that XAI can lead to privacy leaks and increased susceptibility to adversarial attacks. We categorize our findings according to XAI taxonomy classes and provide a concise overview of the corresponding fundamental concepts. Furthermore, we discuss how XAI interacts with prevalent privacy defenses and how it addresses the unique requirements of the security domain. Our findings contribute to the growing literature on XAI in the realm of CI protection and beyond, paving the way for future research in the field of trustworthy AI.

Citation (APA)

Spartalis, C. N., Semertzidis, T., & Daras, P. (2024). Balancing XAI with Privacy and Security Considerations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 14399 LNCS, pp. 111–124). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-54129-2_7
