Abstract
Deep neural networks (DNNs) have demonstrated superior performance in various domains. However, a social concern arises as to whether DNNs can produce reliable and fair decisions, especially when they are applied to sensitive domains involving the allocation of valuable resources, such as education, loans, and employment. It is crucial to conduct fairness testing before DNNs are deployed to such sensitive domains, i.e., to generate as many instances as possible that uncover fairness violations. However, existing testing methods are still limited in three aspects: interpretability, performance, and generalizability. To overcome these challenges, we propose NeuronFair, a new DNN fairness testing framework that differs from previous work in several key aspects: (1) interpretable - it quantitatively interprets DNNs' fairness violations for the biased decision; (2) effective - it uses the interpretation results to guide the generation of more diverse instances in less time; (3) generic - it can handle both structured and unstructured data. Extensive evaluations across 7 datasets and the corresponding DNNs demonstrate NeuronFair's superior performance. For instance, on structured datasets, it generates many more instances (×5.84) and saves more time (with an average speedup of 534.56%) than the state-of-the-art methods. Moreover, the instances generated by NeuronFair can be leveraged to improve the fairness of biased DNNs, which helps build fairer and more trustworthy deep learning systems. The code of NeuronFair is open-sourced at https://github.com/haibinzheng/NeuronFair.
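To make "an instance that uncovers a fairness violation" concrete, the sketch below shows the individual-discrimination check commonly used in fairness testing: two inputs that differ only in a sensitive attribute yet receive different predicted labels. This is a minimal illustration, not NeuronFair's biased-neuron-guided generation; the toy model, the sensitive-attribute index, and the helper name are assumptions made purely for a runnable example.

```python
import numpy as np

def is_discriminatory(model, x, sensitive_idx, sensitive_values):
    """Hypothetical helper: returns True if flipping only the sensitive
    attribute of `x` changes the model's predicted label."""
    base_label = np.argmax(model(x))
    for v in sensitive_values:
        if v == x[sensitive_idx]:
            continue
        x_flipped = x.copy()
        x_flipped[sensitive_idx] = v          # change only the sensitive attribute
        if np.argmax(model(x_flipped)) != base_label:
            return True                       # fairness violation uncovered
    return False

# Toy stand-in for a trained DNN (assumption, kept linear so the example runs anywhere).
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 2))
toy_model = lambda x: x @ W                   # maps a 5-d input to two class scores

x = rng.normal(size=5)
x[0] = 0.0                                    # suppose index 0 encodes a binary sensitive attribute
print(is_discriminatory(toy_model, x, sensitive_idx=0, sensitive_values=[0.0, 1.0]))
```

A fairness testing framework such as NeuronFair aims to generate as many inputs as possible for which a check of this kind succeeds, so that the uncovered instances can later be used to retrain and debias the model.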
Citation
Zheng, H., Chen, Z., Du, T., Zhang, X., Cheng, Y., Ji, S., … Chen, J. (2022). NeuronFair: Interpretable White-Box Fairness Testing through Biased Neuron Identification. In Proceedings - International Conference on Software Engineering (Vol. 2022-May, pp. 1519–1531). IEEE Computer Society. https://doi.org/10.1145/3510003.3510123