Background
Although numerous studies have shown the potential of artificial intelligence (AI) systems to drastically improve clinical practice, there are concerns that these AI systems could replicate existing biases.

Objective
This paper provides a brief overview of 'algorithmic bias', which refers to the tendency of some AI systems to perform poorly for disadvantaged or marginalised groups.

Discussion
AI relies on data generated, collected, recorded and labelled by humans. If AI systems remain unchecked, whatever biases exist in the real world and are embedded in data will be incorporated into AI algorithms. Algorithmic bias can be considered an extension, if not a new manifestation, of existing social biases, understood as negative attitudes towards, or the discriminatory treatment of, some groups. In medicine, algorithmic bias can compromise patient safety and risks perpetuating disparities in care and outcomes. Thus, clinicians should consider the risk of bias when deploying AI-enabled tools in their practice.
Aquino, Y. S. J. (2023). Making decisions: Bias in artificial intelligence and data-driven diagnostic tools. Australian Journal of General Practice, 52(7), 439–442. https://doi.org/10.31128/AJGP-12-22-6630