Direct-to-consumer medical artificial intelligence/machine learning (AI/ML) applications are increasingly used for a variety of diagnostic assessments, and the emphasis on telemedicine and home healthcare during the COVID-19 pandemic may further stimulate their adoption. In this Perspective, we argue that the AI/ML regulatory landscape should operate differently when a system is designed for clinicians or doctors than when it is designed for personal use. Direct-to-consumer applications raise unique concerns because their users tend to have limited statistical and medical literacy and to be risk averse about their health outcomes. This creates an environment in which false alarms can proliferate and burden public healthcare systems and medical insurers. While similar situations exist elsewhere in medicine, the ease and frequency with which AI/ML apps can be used, and their increasing prevalence in the consumer market, call for careful reflection on how to regulate them effectively. We suggest that regulators should strive to better understand how consumers interact with direct-to-consumer medical AI/ML apps, particularly diagnostic ones; this requires more than a focus on a system’s technical specifications. We further argue that the best regulatory review would also consider such technologies’ social costs under widespread use.
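To make the false-alarm concern concrete, here is a minimal back-of-the-envelope sketch (our illustration, not from the paper): under Bayes' rule, even a reasonably accurate diagnostic app produces mostly false positives when the condition is rare among its users, and at consumer scale those false positives add up. Every figure below (sensitivity, specificity, prevalence, user count) is a hypothetical assumption chosen only to show the arithmetic.

    # Illustrative sketch: false-alarm burden of a hypothetical
    # direct-to-consumer diagnostic app. All parameter values are assumed.

    def positive_predictive_value(sensitivity, specificity, prevalence):
        """Probability that a positive result reflects true disease (Bayes' rule)."""
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    if __name__ == "__main__":
        sensitivity = 0.95   # assumed: app flags 95% of true cases
        specificity = 0.95   # assumed: app correctly clears 95% of healthy users
        prevalence = 0.005   # assumed: 0.5% of the app's users actually have the condition
        users = 1_000_000    # assumed scale of a popular consumer app

        ppv = positive_predictive_value(sensitivity, specificity, prevalence)
        false_alarms = users * (1 - prevalence) * (1 - specificity)

        print(f"PPV at {prevalence:.1%} prevalence: {ppv:.1%}")               # ~8.7%
        print(f"Expected false alarms among {users:,} users: {false_alarms:,.0f}")  # ~49,750

Under these assumed numbers, fewer than one in ten positive results is a true case, and roughly 50,000 healthy users would be sent toward follow-up care; the point is not the specific values but how quickly frequent, low-prevalence use translates into load on healthcare systems and insurers.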
Babic, B., Gerke, S., Evgeniou, T., & Cohen, I. G. (2021). Direct-to-consumer medical machine learning and artificial intelligence applications. Nature Machine Intelligence. https://doi.org/10.1038/s42256-021-00331-0