Algorithmic paranoia and the convivial alternative

Abstract

In a time of big data, thinking about how we are seen and how that affects our lives means changing our idea about who does the seeing. Data produced by machines is most often ‘seen’ by other machines; the eye in question is algorithmic. Algorithmic seeing does not produce a computational panopticon but a mechanism of prediction. The authority of its predictions rests on a slippage of the scientific method into the world of data. Data science inherits some of the problems of science, especially the disembodied ‘view from above’, and adds new ones of its own. Because its core methods, such as machine learning, are based on seeing correlations rather than understanding causation, it reproduces the prejudices of its input. Rising into the apparatuses of governance, it reinforces the problematic sides of ‘seeing like a state’ and links to the recursive production of paranoia. It forces us to ask the question ‘what counts as rational seeing?’. Answering this from a position of feminist empiricism reveals different possibilities latent in seeing with machines. Grounded in the idea of conviviality, machine learning may reveal forgotten non-market patterns and enable free and critical learning. It is proposed that a programme to challenge the production of irrational pre-emption is also a search for the possibility of algorithmic conviviality.

Citation (APA)

McQuillan, D. (2016). Algorithmic paranoia and the convivial alternative. Big Data and Society, 3(2). https://doi.org/10.1177/2053951716671340
