AI in Public Governance: Ensuring Rights and Innovation in Non-High-Risk AI Systems in the United States

  • Tasriqul Islam
  • Sadia Afrin
  • Neda Zand

Abstract

Purpose: Artificial intelligence (AI) is one of the most rapidly developing fields in information technology. This paper contributes to the ongoing effort to create an AI governance framework that accounts for public confidence in AI policy. The article first discusses why public trust is essential to the sound regulation of new technologies, and then assesses public sentiment toward AI as it relates to governmental functions. Materials and Methods: The researchers examined how people in the United States perceive AI, how it is being used, and whether it is suitable for public administration tasks. Findings: The findings show that opinions differ on whether AI is acceptable and on whether its judgments have long-term effects on the job market, the justice system, and national security. The 2018 AI Public Opinion Survey found that while many Americans are worried about AI, many also see its potential. Implications to Theory, Practice and Policy: The article concludes that public trust is fundamental to effective AI governance.

Citation (APA)

Tasriqul Islam, Sadia Afrin, & Neda Zand. (2024). AI in Public Governance: Ensuring Rights and Innovation in Non-High-Risk AI Systems in the United States. European Journal of Technology, 8(6), 17–27. https://doi.org/10.47672/ejt.2577
