Digital Compliance: The Case for Algorithmic Transparency

Abstract

Together with their undeniable advantages, the new technologies of the Fintech Revolution bring new risks. Some of these risks are already known but have taken on a new form; others are entirely new. Among the latter, one of the most significant concerns the opacity of artificial intelligence (AI). This lack of transparency raises questions not only about measuring the correctness and efficiency of the choices made by an algorithm, but also about the impact of those choices on third parties. There is, therefore, an issue of the legitimacy of the decision thus made: its opacity renders it arbitrary and insensitive to the rights of the third parties affected by it. It is thus essential to understand what level of explanation is needed before the use of an algorithm can be permitted. Focusing on the AI transparency issue, there are grounds for believing that, at least in the EU, the costs arising from a lack of transparency cannot be passed on to third parties and must instead be managed inside the enterprise. The task of the enterprise, its directors, and in particular its compliance function must therefore be dynamic, taking into account all foreseeable AI risks.

Citation (APA)

Mozzarelli, M. (2021). Digital Compliance: The Case for Algorithmic Transparency. In Corporate Compliance on a Global Scale: Legitimacy and Effectiveness (pp. 259–284). Springer International Publishing. https://doi.org/10.1007/978-3-030-81655-1_12
