The opacity of artificial intelligence makes it hard to tell when decision-making is biased


Abstract

If you're on Facebook, click on 'Why am I seeing this ad?' The answer will look something like '[Advertiser] wants to reach people who may be similar to their customers' or '[Advertiser] is trying to reach people ages 18 and older' or '[Advertiser] is trying to reach people whose primary location is the United States.' Oh, you'll also see 'There could also be more factors not listed here.' Such explanations started appearing on Facebook in response to complaints about the platform's ad-placing artificial-intelligence (AI) system. For many people, it was their first encounter with the growing trend of explainable AI, or XAI.

Citation

Hutson, M. (2021). The opacity of artificial intelligence makes it hard to tell when decision-making is biased. IEEE Spectrum, 58(2), 40–45. https://doi.org/10.1109/MSPEC.2021.9340114
