The Right Not to Be Subjected to AI Profiling Based on Publicly Available Data—Privacy and the Exceptionalism of AI Profiling

Abstract

Social media data hold considerable potential for predicting health-related conditions. Recent studies suggest that machine-learning models may accurately predict depression and other mental health-related conditions based on Instagram photos and Tweets. In this article, it is argued that individuals should have a sui generis right not to be subjected to AI profiling based on publicly available data without their explicit informed consent. The article (1) develops three basic arguments for a right to protection of personal data trading on the notions of social control and stigmatization, (2) argues that a number of features of AI profiling make individuals more exposed to social control and stigmatization than other types of data processing (the exceptionalism of AI profiling), (3) considers a series of other reasons for and against protecting individuals against AI profiling based on publicly available data, and finally (4) argues that the EU General Data Protection Regulation does not ensure that individuals have a right not to be AI profiled based on publicly available data.

Citation (APA)

Ploug, T. (2023). The Right Not to Be Subjected to AI Profiling Based on Publicly Available Data—Privacy and the Exceptionalism of AI Profiling. Philosophy and Technology, 36(1). https://doi.org/10.1007/s13347-023-00616-9
