Abstract
Introduction
Artificial intelligence (AI) has transformative potential in healthcare, promising advances in diagnostics, treatment, and patient management, and attracting significant investment and policy efforts globally. Effective AI governance, comprising guidelines, policy papers, and regulations, is crucial for its successful integration.

Methods
This study evaluates 10 AI policies to highlight the implications of AI governance for healthcare: those of 5 international organizations (the United Nations, the Organisation for Economic Co-operation and Development (OECD), the Council of Europe, the G20, and UNESCO) and 5 regional/national entities (Brazil, the United States, the European Union (EU), China, and the United Kingdom).

Results
The EU AI Act focuses on risk management and individual protection while fostering innovation aligned with European values. The United Kingdom and the United States adopt a more flexible approach, offering guidelines to stimulate rapid AI integration and innovation without imposing strict regulations. Brazil shows a convergence toward the EU's risk-based approach.

Conclusions
The study explores the normative implications of these varied approaches. The EU's stringent regulations may ensure higher safety and ethical standards, potentially setting a global benchmark, but they could also hinder innovation and pose compliance challenges. The United Kingdom's lenient approach may drive faster AI adoption and competitiveness but risks inconsistencies in safety and ethics. The study concludes by offering recommendations for future research.
Mazzi, F. (2025). Evaluating the normative implications of national and international artificial intelligence policies for Sustainable Development Goal 3: good health and well-being. Health Affairs Scholar, 3(6). https://doi.org/10.1093/haschl/qxaf108