Anticipatory regulatory instruments are pre-emptive approaches to identifying and anticipating risks arising from new technologies. They can also act as indicators of 'pro-innovation' economic support for digital technologies. The extent to which regulatory agencies can both fulfil their regulatory remit, aimed at the protection of the public good, and signal support for innovative and disruptive technologies is an open policy question. Regulatory sandbox schemes are comparatively new anticipatory tools, operating within a small number of regulators, and their potential to assess contextual or cross-sectoral risk is unclear. However, emerging proposals for the regulation of AI increasingly feature various models of regulatory sandboxes, often aligned to the need to reduce access barriers for SMEs and innovators. Examples include the European Commission's proposal for a regulation concerning AI [3] and the recent United Kingdom AI White Paper, AI Regulation: A Pro-Innovation Approach [8]. Disentangling the causal dimensions of why regulatory sandboxes are proposed to regulate AI, and assessing their utility as tools of pre-emptive risk assessment, are my core research questions. The regulation of emerging digital technologies presents challenges for regulators and governments in monitoring rapid global developments and in anticipating novel forms of risk [9]. Nesta introduced the term anticipatory regulation, and such approaches potentially provide 'a set of behaviours and tools - i.e., a way of working - that is intended to help regulators identify, build and test solutions to emerging challenges' [4]. Regulatory sandboxes are a prominent, and arguably the most widespread, example of such an anticipatory regulatory tool. Whilst there are varied definitions of regulatory sandbox schemes, existing schemes allow small-scale, live testing of innovations in a controlled environment under the supervision of a regulatory authority [6].
A small number of regulatory sandbox schemes are in operation within the UK, operating within sectoral and cross-sectoral regulatory remits. However, empirical data and academic literature regarding the methodologies and operation of these current schemes, and literature exploring regulatory sandboxes more broadly, are scarce [7, 10]. The ontological focus of my work is critical realist: it accepts the external reality of the design and instrumental aims of sandbox schemes, whilst seeking to understand the underlying causes and drivers for their use and rapid promotion. To locate such causes and explanations, it is necessary to examine existing schemes within the 'rules and norms' of their institutional context and structures [1, 2]. Institutional analysis will isolate the key dimensions of each scheme, consider the influence of the regulatory structures, and then test such analysis through empirical research with regulatory and policy actors. The core hypothesis of my research is that regulatory contexts, path dependencies and conceptions of risk are significant causal elements within existing sandbox schemes and, as such, may present a challenge when designing and deploying cross-sectoral sandbox schemes for AI systems. I have already undertaken analysis of two regulatory sandbox schemes, applying the Institutional Analysis and Development framework of Elinor Ostrom [5]. This analysis has highlighted significant dimensions of sandbox schemes, including the role and forms of sectoral incentives for participants, how knowledge and conceptions of risk are shared, and the potential role of participatory processes and stakeholders. I am drafting a forthcoming paper outlining a typology of incentives for existing regulatory sandbox schemes. I have included policy and wider sectoral stakeholders within my data collection to obtain perspectives regarding the perceived utility, understandings, and conceptions of sandbox schemes.
Incorporating collaborative processes and inclusive engagement with affected stakeholders is a key principle of anticipatory regulation [4]. The role and extent of such engagement within proposed sandbox schemes for AI is a further dimension of my research, considering how such processes may be developed and operationalised. This work is undertaken at a time of rapid progression in AI systems and in the development of proposed AI regulation and varied forms of decentralised AI governance. I hope that my research will provide an understanding of the utility, and potential limitations, of sandboxes as a regulatory tool, drawing upon data from existing practices. My work may also inform existing policy discussions around the role of sandbox schemes as risk assessment and information monitoring tools for regulators.
Morgan, D. (2023). Anticipatory regulatory instruments for AI systems: A comparative study of regulatory sandbox schemes. In AIES 2023 - Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (pp. 980–981). Association for Computing Machinery, Inc. https://doi.org/10.1145/3600211.3604732