Evaluating Deep Learning Biases Based on Grey-Box Testing Results

Abstract

Deep learning approaches are immensely successful at processing large real-world data sets in tasks such as image recognition, speech recognition, and language translation. However, considerable research has shown that deep learning exhibits biases that arise in the design, production, deployment, and use of AI/ML technologies. In this paper, we first explain mathematically the causes of these biases and then propose a way to evaluate them based on the testing results of neurons and auto-encoders in deep learning. Our interpretation views each neuron or auto-encoder as an approximation of a similarity measurement, whose grey-box testing results can be used to measure biases and to find ways to reduce them. We argue that monitoring deep learning network structures and parameters is an effective way to catch the sources of biases in deep learning.
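To make the abstract's idea concrete, here is a minimal sketch (not code from the paper) of one way a neuron's pre-activation could be read as a similarity score between an input and the neuron's weight vector, and how a grey-box comparison of activation statistics across two input subgroups might flag a neuron for bias inspection. The function names, the gap metric, and the toy data are illustrative assumptions, not the authors' method.

```python
import numpy as np

def neuron_similarity(weights, bias, x):
    """Pre-activation of a single neuron, read loosely as a similarity
    score between input x and the neuron's weight vector."""
    return float(np.dot(weights, x) + bias)

def activation_gap(weights, bias, group_a, group_b):
    """Grey-box check: compare the neuron's mean activation on two
    subgroups of inputs; a large gap marks the neuron for inspection."""
    mean_a = np.mean([neuron_similarity(weights, bias, x) for x in group_a])
    mean_b = np.mean([neuron_similarity(weights, bias, x) for x in group_b])
    return abs(mean_a - mean_b)

# Toy usage: random data standing in for two input subgroups.
rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.1
group_a = rng.normal(loc=0.0, size=(100, 4))
group_b = rng.normal(loc=0.5, size=(100, 4))
print(activation_gap(w, b, group_a, group_b))
```

In this reading, monitoring such per-neuron statistics alongside network structure and parameters is what allows bias to be traced to specific components rather than observed only at the output.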

Citation (APA)

Jenny Li, J., Silva, T., Franke, M., Hai, M., & Morreale, P. (2021). Evaluating Deep Learning Biases Based on Grey-Box Testing Results. In Advances in Intelligent Systems and Computing (Vol. 1250 AISC, pp. 641–651). Springer. https://doi.org/10.1007/978-3-030-55180-3_48
