Artificial intelligence holds great promise for delivering beneficial, accurate and effective preventive and curative interventions. At the same time, there is growing awareness of the potential risks and harms that unregulated development of artificial intelligence may cause. Guiding principles are being developed around the world to foster the trustworthy development and application of artificial intelligence systems. These guidelines can support developers and governing authorities when making decisions about the use of artificial intelligence. The High-Level Expert Group on Artificial Intelligence, set up by the European Commission, launched the report Ethical guidelines for trustworthy artificial intelligence in 2019. The report aims to contribute to reflection and discussion on the ethics of artificial intelligence technologies beyond the countries of the European Union (EU) as well. In this paper, we use the global health sector as a case and argue that the EU's guidance leaves too much room for local, contextualized discretion for it to foster trustworthy artificial intelligence globally. We point to the urgency of shared, globalized efforts to safeguard against the potential harms of artificial intelligence technologies in health care.
Citation:
Bærøe, K., Miyata-Sturm, A., & Henden, E. (2020). How to achieve trustworthy artificial intelligence for health. Bulletin of the World Health Organization, 98(4), 257–262. https://doi.org/10.2471/BLT.19.237289