White-box fairness testing through adversarial sampling

84 citations · 69 Mendeley readers

Abstract

Although deep neural networks (DNNs) have demonstrated astonishing performance in many applications, there are still concerns about their dependability. One desirable property of DNNs for applications with societal impact is fairness (i.e., non-discrimination). In this work, we propose a scalable approach for searching for individual discriminatory instances of DNNs. Compared with state-of-the-art methods, our approach employs only lightweight procedures such as gradient computation and clustering, which makes it significantly more scalable than existing methods. Experimental results show that our approach explores the search space more effectively (9 times), generates many more individual discriminatory instances (25 times), and uses much less time (half to 1/7).
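To make the notion concrete: an individual discriminatory instance is an input whose predicted label changes when only a protected attribute (e.g., gender or race) is altered, all other features held fixed. The sketch below illustrates this definition on a hypothetical toy classifier; the model, its weights, and the helper names are illustrative assumptions, not the paper's actual method or benchmarks.

```python
import numpy as np

# Hypothetical toy binary classifier: a logistic model with fixed weights.
# Feature 0 plays the role of the protected attribute (encoded 0/1); its
# large weight deliberately bakes in discrimination for illustration.
WEIGHTS = np.array([2.0, 0.5, -0.3])
BIAS = -0.5

def predict(x):
    """Hard label (0 or 1) of the toy model for input x."""
    return int(1.0 / (1.0 + np.exp(-(x @ WEIGHTS + BIAS))) >= 0.5)

def is_discriminatory(x, protected_idx=0, values=(0.0, 1.0)):
    """x is individually discriminatory if varying only the protected
    attribute over its possible values flips the model's prediction."""
    labels = {predict(np.where(np.arange(len(x)) == protected_idx, v, x))
              for v in values}
    return len(labels) > 1

x = np.array([0.0, 0.4, 0.2])
print(is_discriminatory(x))  # flipping feature 0 changes the label here
```

A white-box tester like the one the abstract describes would search for such inputs guided by the model's gradients rather than by random probing, which is where the claimed scalability gains come from.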

Citation (APA)

Zhang, P., Wang, J., Sun, J., Dong, G., Wang, X., Wang, X., … Dai, T. (2020). White-box fairness testing through adversarial sampling. In Proceedings - International Conference on Software Engineering (pp. 949–960). IEEE Computer Society. https://doi.org/10.1145/3377811.3380331
