Abstract
This article analyses the EU's regulation of medical artificial intelligence (AI) from a product safety perspective, concentrating on the interplay between the proposed AI Act (AIA) and the Medical Device Regulation (MDR). Recent advances in AI development illustrate the future potential of generative AI technologies, including those based on Large Language Models (LLMs). In a medical context, AI systems with different degrees of generativity are conceivable. These AI systems can pose new types of risks that are specific to AI technologies, as well as more traditional risks that are typical of medical devices. The proposed AIA is intended to address the AI-specific risks foreseen by the EU legislature, whereas the MDR addresses more traditional medical risks. Through two case studies displaying different degrees of generativity, this article identifies regulatory lacunae at the intersection between the AIA and the MDR. The article suggests that the emerging regulatory framework for medical AI systems potentially leaves certain AI-specific risks, as well as certain typical medical device risks, unregulated. Finally, the article discusses possible solutions that are compatible with the intentions of the EU legislature pertaining to the regulation of medical AI systems.
Citation
Hauglid, M. K., & Mahler, T. (2023). Doctor Chatbot: The EU’s Regulatory Prescription for Generative Medical AI. Oslo Law Review, 10(1), 1–23. https://doi.org/10.18261/olr.10.1.1