Artificial Intelligence and Human Rights: Corporate Responsibility in AI Governance Initiatives


Abstract

Private businesses are central actors in the development of artificial intelligence (AI), meaning they have a key role in ensuring that AI respects human rights. Meanwhile, international human rights law (IHRL) has been scrambling to catch up with technological developments that have occurred since the establishment of its state-centric framework that were not envisaged by its drafters. Despite progress in the development of international legal standards on business and human rights, uncertainties regarding the role and responsibilities of AI businesses remain. This article addresses these uncertainties from a governance perspective and against the backdrop of the public/private divide; it views laws as instruments of governance, which comprises activities by many public and private actors. Section 2 briefly assesses the current framework of IHRL regarding AI and businesses, focusing on the lack of legal certainty. Section 3 critically analyses AI initiatives beyond IHRL that have been adopted at international, regional, and national levels to gain insight into specific standards of behaviour expected of AI businesses, as well as to challenge a dichotomous public/private divide in this context. Section 4 provides conclusions and recommendations.

Citation (APA)

Lane, L. (2023). Artificial Intelligence and Human Rights: Corporate Responsibility in AI Governance Initiatives. Nordic Journal of Human Rights, 41(3), 304–325. https://doi.org/10.1080/18918131.2022.2137288
