When AI Is Wrong: Addressing Liability Challenges in Women’s Healthcare

Abstract

Healthcare professionals can leverage artificial intelligence (AI) to provide better care for their patients. However, AI algorithms operate on historical diagnostic data, which often consist largely of evidence gathered from men. Biases inherited from prior practice and the perpetuation of processes that exclude women can lead to inaccurate medical decisions. The ramifications of such errors show that the incorrect use of AI raises several critical questions about who should be held responsible for potential incidents. This study analyzes the role of AI in shaping women's healthcare and surveys the liability implications of AI mistakes. Finally, it presents a framework for algorithmic auditing to ensure that AI data are collected and stored according to secure, legal, and fair practices.

Citation (APA)

Marotta, A. (2022). When AI Is Wrong: Addressing Liability Challenges in Women’s Healthcare. Journal of Computer Information Systems, 62(6), 1310–1319. https://doi.org/10.1080/08874417.2022.2089773
