Artificial Intelligence (AI) is one of the most significant of the information and communications technologies being applied to surveillance. AI's proponents argue that its promise is great, and that successes have been achieved, whereas its detractors draw attention to the many threats embodied in it, some of which are much more problematic than those arising from earlier data analytical tools. This article considers the full gamut of regulatory mechanisms. The scope extends from natural and infrastructural regulatory mechanisms, via self-regulation, including the recently-popular field of 'ethical principles', to co-regulatory and formal approaches. An evaluation is provided of the adequacy or otherwise of the world's first proposal for formal regulation of AI practices and systems, by the European Commission. To lay the groundwork for the analysis, an overview is provided of the nature of AI. The conclusion reached is that, despite the threats inherent in the deployment of AI, the current safeguards are seriously inadequate, and the prospects for near-future improvement are far from good. To avoid undue harm from AI applications to surveillance, it is necessary to rapidly enhance existing, already-inadequate safeguards and establish additional protections.
Citation
Clarke, R. (2022). Responsible application of artificial intelligence to surveillance: What prospects? Information Polity, 27(2), 175–191. https://doi.org/10.3233/IP-211532