Ready but irresponsible? Analysis of the Government Artificial Intelligence Readiness Index



Abstract

Many are the promises of artificial intelligence (AI) and algorithms. Governments around the world are increasingly investing in AI, and multiple voices have touted this seemingly unmatched revolution. Better performance, cost reduction, efficient management, and crime prediction and prevention are but a few of the pledges of the AI era. While such promises are recognized, research shows that the benefits of AI may be overstated. Issues of equity, ethics, justice, and fairness have raised concerns and have been seen as potentially threatening democratic principles. As countries get ready to tap into the power of AI, researchers are asking whether preparedness is accompanied by responsibility checks. In this article, we use the Oxford Insights AI Readiness Index to explore why innovation and readiness in artificial intelligence are not always accompanied by accountability, even in some of the most advanced democracies around the world. Using the Fuzzy-Set Qualitative Comparative Analysis (fsQCA) approach, we show that advancement in AI is not enough: privacy, transparency, inclusion, and accountability principles are key to ensuring that governments tackle the AI challenge responsibly.

Citation (APA)

Nzobonimpa, S., & Savard, J. F. (2023). Ready but irresponsible? Analysis of the Government Artificial Intelligence Readiness Index. Policy and Internet, 15(3), 397–414. https://doi.org/10.1002/poi3.351
