The Global Governance of Artificial Intelligence: Some Normative Concerns

20 citations · 65 Mendeley readers

Abstract

The creation of increasingly complex artificial intelligence (AI) systems raises urgent questions about their ethical and social impact on society. Since this impact ultimately depends on political decisions about normative issues, political philosophers can make valuable contributions by addressing such questions. Currently, AI development and application are to a large extent regulated through non-binding ethics guidelines penned by transnational entities. Assuming that the global governance of AI should be at least minimally democratic and fair, this paper sets out three desiderata that an account should satisfy when theorizing about what this means. We argue, first, that an analysis of democratic values, political entities and decision-making should be done in a holistic way; second, that fairness is not only about how AI systems treat individuals, but also about how the benefits and burdens of transformative AI are distributed; and finally, that justice requires that governance mechanisms are not limited to AI technology, but are incorporated into a range of basic institutions. Thus, rather than offering a substantive theory of democratic and fair AI governance, our contribution is metatheoretical: we propose a theoretical framework that sets up certain normative boundary conditions for a satisfactory account.

Citation (APA)

Erman, E., & Furendal, M. (2022). The Global Governance of Artificial Intelligence: Some Normative Concerns. Moral Philosophy and Politics, 9(2), 267–291. https://doi.org/10.1515/mopp-2020-0046
