Abstract
The EU Commission’s proposal for the Regulation on Artificial Intelligence, whilst underlining the importance of transparency for high-risk systems, falls short of providing a nuanced picture of how the technical safeguards in Articles 13 and 14 of the proposal should be translated to AI systems operating on the ground. This paper, focusing on medical diagnostic systems, offers a perspective on how transparency safeguards should be applied in practice, considering the role of post hoc explainability and uncertainty estimates in medical imaging. Medical diagnostic systems offer probabilistic judgements on disease classification tasks, which affect the interactive experience between doctor and patient. Accordingly, we need additional guidance on Articles 13 and 14 of the proposal that considers the role of shared decision-making and patient autonomy in healthcare, and ensures that technical safeguards secure medical diagnostic systems that are safe, reliable, and trustworthy.
Citation
Onitiu, D. (2023). The limits of explainability & human oversight in the EU Commission’s proposal for the Regulation on AI- a critical approach focusing on medical diagnostic systems. Information and Communications Technology Law, 32(2), 170–188. https://doi.org/10.1080/13600834.2022.2116354