Managing bias in AI

198 citations · 370 Mendeley readers

Abstract

Growing awareness of the impact of bias in AI algorithms raises the risk companies face when deploying them, especially because AI algorithms may not be explainable in the way non-AI algorithms are. Even with careful review of the algorithms and data sets, it may not be possible to eliminate all unwanted bias, particularly because AI systems learn from historical data, which encodes historical biases. In this paper, we propose a set of processes that companies can use to mitigate and manage three general classes of bias: those introduced when mapping the business intent into an AI implementation, those that arise from the distribution of samples used for training, and those present in individual input samples. While there may be no simple or complete solution to this issue, best practices can reduce the effects of bias on algorithmic outcomes.
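One of the three classes above, bias arising from the distribution of training samples, can be screened for mechanically. The sketch below is an illustration of that idea, not a method from the paper: it compares each group's share of a training set against a reference population share and flags deviations. The function name, the threshold, and the group labels are all hypothetical.

```python
from collections import Counter

def distribution_skew(sample_groups, reference_shares, threshold=0.1):
    """Flag groups whose share of the training sample deviates from a
    reference population share by more than `threshold` (absolute).

    sample_groups: list of group labels, one per training sample.
    reference_shares: dict mapping group label -> expected share (0..1).
    Returns dict of flagged groups -> signed deviation (sample - reference).
    """
    counts = Counter(sample_groups)
    total = sum(counts.values())
    flagged = {}
    for group, ref_share in reference_shares.items():
        share = counts.get(group, 0) / total
        if abs(share - ref_share) > threshold:
            flagged[group] = round(share - ref_share, 3)
    return flagged

# Hypothetical example: group "B" is under-represented relative to a
# 50/50 reference population, so both groups are flagged as skewed.
sample = ["A"] * 80 + ["B"] * 20
print(distribution_skew(sample, {"A": 0.5, "B": 0.5}))
# {'A': 0.3, 'B': -0.3}
```

A check like this only detects representation skew; it says nothing about the other two classes (intent-mapping bias and bias in individual inputs), which the paper addresses through process rather than measurement alone.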

Citation (APA)

Roselli, D., Matthews, J., & Talagala, N. (2019). Managing bias in AI. In The Web Conference 2019 - Companion of the World Wide Web Conference, WWW 2019 (pp. 539–544). Association for Computing Machinery, Inc. https://doi.org/10.1145/3308560.3317590
