Incorporating the Concept of Bias and Fairness in Cybersecurity Curricular Module


Abstract

Although Artificial Intelligence has become an integral part of modern cybersecurity solutions, data bias and algorithmic bias have made it vulnerable to many cyberattacks, in particular adversarial attacks, where the attacker crafts inputs to the AI system to exploit possible bias in the data or algorithms. In this paper, we share our experiences with ongoing work to develop and evaluate a cybersecurity curricular module that demonstrates (a) data bias detection, (b) data bias mitigation, (c) algorithmic bias detection, and (d) algorithmic bias mitigation, using a network intrusion detection problem on real-world data. The module includes lectures and hands-on exercises, using state-of-the-art, open-source bias detection and mitigation software on a real-world dataset. The goal is to identify and mitigate the prevailing conscious/unconscious bias in data and algorithms that an attacker might exploit.
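The abstract does not name the specific open-source tools or dataset columns used in the module, so the following is only a minimal sketch of what the first two components, data bias detection and data bias mitigation, might look like in a hands-on exercise. It uses plain pandas on hypothetical toy data (the column names `protected` and `label` and the reweighing approach are assumptions, not taken from the paper): it computes two common data-bias metrics (disparate impact and statistical parity difference) and then applies Kamiran-Calders style reweighing as a mitigation step.

```python
import numpy as np
import pandas as pd

# Hypothetical toy data standing in for a network-intrusion dataset (assumption):
# 'protected' marks a group attribute (e.g., a traffic-source category),
# 'label' is 1 for the favorable outcome and 0 otherwise.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "protected": rng.integers(0, 2, size=1000),
    "label": rng.integers(0, 2, size=1000),
})

# (a) Data bias detection: disparate impact and statistical parity difference.
p_priv = df.loc[df.protected == 1, "label"].mean()    # P(favorable | privileged)
p_unpriv = df.loc[df.protected == 0, "label"].mean()  # P(favorable | unprivileged)
disparate_impact = p_unpriv / p_priv
statistical_parity_diff = p_unpriv - p_priv
print(f"Disparate impact:              {disparate_impact:.3f}")
print(f"Statistical parity difference: {statistical_parity_diff:.3f}")

# (b) Data bias mitigation via reweighing: give each (group, label) cell the
# weight P(group) * P(label) / P(group, label) so that group membership and
# label become statistically independent under the weighted distribution.
weights = pd.Series(1.0, index=df.index)
for g in (0, 1):
    for y in (0, 1):
        mask = (df.protected == g) & (df.label == y)
        expected = (df.protected == g).mean() * (df.label == y).mean()
        observed = mask.mean()
        weights[mask] = expected / observed
# 'weights' can then be passed as sample weights when training a classifier.
```

A disparate impact close to 1.0 and a statistical parity difference close to 0.0 indicate little group-level data bias; the reweighed samples can feed any downstream intrusion-detection classifier, after which the same metrics can be recomputed on its predictions to check for algorithmic bias.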

Citation (APA)

Islam, S. R., Russell, I., & Gupta, M. (2023). Incorporating the Concept of Bias and Fairness in Cybersecurity Curricular Module. In SIGCSE 2023 - Proceedings of the 54th ACM Technical Symposium on Computer Science Education (Vol. 2, p. 1358). Association for Computing Machinery, Inc. https://doi.org/10.1145/3545947.3576302
