Overview and commentary of the CDEI's extended roadmap to an effective AI assurance ecosystem


Abstract

In recent years, the field of ethical artificial intelligence (AI), or AI ethics, has gained traction, aiming to develop guidelines and best practices for the responsible and ethical use of AI across sectors. As part of this, nations have proposed AI strategies, with the UK releasing both national AI and data strategies, as well as a transparency standard. Extending these efforts, the Centre for Data Ethics and Innovation (CDEI) has published an AI Assurance Roadmap, the first of its kind, which provides guidance on how to manage the risks that arise from the use of AI. In this article, we provide an overview of the document's vision for a “mature AI assurance ecosystem” and of how the CDEI will work with other organizations to develop regulation and industry standards and to build a profession of AI assurance practitioners. We also provide a commentary on key themes identified in the CDEI's roadmap, relating to (i) the complexities of building “justified trust”, (ii) the role of research in AI assurance, (iii) current developments in the AI assurance industry, and (iv) convergence with international regulation.

Citation (APA)

Barrance, E., Kazim, E., Hilliard, A., Trengove, M., Zannone, S., & Koshiyama, A. (2022, August 10). Overview and commentary of the CDEI’s extended roadmap to an effective AI assurance ecosystem. Frontiers in Artificial Intelligence. Frontiers Media S.A. https://doi.org/10.3389/frai.2022.932358
