Artificial intelligence (AI) is being rapidly integrated into healthcare, often accompanied by a naïve belief in the objectivity of AI and a complacent trust in the omniscience of computational knowledge. While AI has the potential to transform healthcare, it raises significant ethical and safety concerns. The pace of AI development and the race for AI supremacy are driving a rapid, and largely unregulated, proliferation of AI applications. It is important to understand that AI technologies bring new and accelerated risks and require meaningful human control and oversight. However, standards and regulation in the field are at a very nascent stage and need urgent attention. This paper explores issues of reliability, transparency, bias, and ethics to illustrate the ground realities and makes a case for developing standards and regulatory frameworks for the safe, effective, and ethical use of AI in healthcare.