The Article 29 Data Protection Working Party's recent draft guidance on automated decision-making and profiling seeks to clarify European data protection (DP) law's little-used right to prevent automated decision-making, as well as the provisions around profiling more broadly, in the run-up to the General Data Protection Regulation. In this paper, we analyse these new guidelines in the context of recent scholarly debates and technological concerns. The guidelines venture into the less-trodden areas of bias and non-discrimination, the significance of advertising, the nature of "solely" automated decisions, impacts upon groups and the inference of special categories of data – at times appearing more to be making or extending rules than to be interpreting them. At the same time, they provide only partial clarity – and perhaps even some extra confusion – around both the much-discussed "right to an explanation" and the apparent prohibition on significant automated decisions concerning children. The Working Party appears to feel less mandated to adjudicate in these conflicts between the recitals and the enacting articles than to explore altogether new avenues. Nevertheless, the directions it chooses to explore are particularly important ones for the future governance of machine learning and artificial intelligence in Europe and beyond.
Veale, M., & Edwards, L. (2018). Clarity, surprises, and further questions in the Article 29 Working Party draft guidance on automated decision-making and profiling. Computer Law & Security Review, 34(2), 398–404. https://doi.org/10.1016/j.clsr.2017.12.002