Against explainability requirements for ethical artificial intelligence in health care

  • Kawamleh S

Abstract

It is widely accepted that explainability is a requirement for the ethical use of artificial intelligence (AI) in health care. I challenge this Explainability Imperative (EI) by considering the following question: does the use of epistemically opaque medical AI systems violate existing legal standards for informed consent? If yes, and if the failure to meet such standards can be attributed to epistemic opacity, then explainability is a requirement for AI in health care. If not, then based on at least one metric of ethical medical practice (informed consent), explainability is not required for the ethical use of AI in health care. First, I show that the use of epistemically opaque AI applications is compatible with meeting accepted legal criteria for informed consent. Second, I argue that human experts are also black boxes with respect to the criteria by which they arrive at a diagnosis, yet they can nonetheless meet established requirements for informed consent. I conclude that the use of black-box AI systems does not violate patients’ rights to informed consent, and thus, with respect to informed consent, explainability is not required for medical AI.

Citation (APA)
Kawamleh, S. (2023). Against explainability requirements for ethical artificial intelligence in health care. AI and Ethics, 3(3), 901–916. https://doi.org/10.1007/s43681-022-00212-1
