The aim of this article is to explore the fundamental attributes of Artificial Intelligence (AI) in order to understand how AI can become biased or misinformed. Through this foundational context, we can consider the potential societal implications of biased AI. Analysis of the current development, use, and regulation of AI applications highlights a unique opportunity for government intervention: embedding safeguards against bias at each step of the AI development lifecycle. These conclusions were developed by assessing comparative legislation within the United States and in other regions, conversations with field experts, and existing publications and literature from academic, government, and non-government organizations.

Executive Summary

The digital era, in which exponential technological innovation coincides with largely unbounded globalization, is perpetually in unprecedented times. This poses a challenge for governing bodies: how do we govern without precedent? One method for identifying innovative solutions to complex problems is the use of Artificial Intelligence (AI). During the COVID-19 pandemic, political stakeholders raced to employ technological applications to prevent and mitigate the effects of COVID outbreaks within their jurisdictions. The global COVID pandemic is just one of many causes behind the continuous growth of AI in the public sector. This paper seeks to understand how and why AI is so popular as a problem-solving tool, and what ethical risks arise from rapid AI adoption. To generate actionable insights from such a multifaceted issue, we will focus on applications of AI with significant impact on human society, and on the public sector's obligation to AI Governance. Throughout this paper, AI Governance refers to a methodology for holding AI systems accountable, ethical, and effective for their intended purpose and audience.
The need to prevent potentially biased AI is more pertinent than ever, as more and more of our everyday lives become intertwined with, and even influenced by, AI. AI is being utilized in the public sector all over the world, most prominently to automate menial but time-consuming tasks; however, utilizing AI to inform complex decisions is also growing in popularity. Because this advanced technology requires a high level of specialized knowledge to build, AI is commonly built and purchased as software through private third parties. Despite common perception, AI cannot make decisions; it can only generate insights from patterns in a data set. Nonetheless, many AI models are utilized by the public and private sectors as if they had the capability to draw comprehensive conclusions. Because AI is so new, many of the individuals implementing and regulating this technology lack the specialized knowledge necessary to effectively safeguard AI models against bias or unethical use. AI bias can often be attributed to the creators of the AI model or dataset; however, bias can also be introduced in many ways within each of the many steps of building, implementing, and regulating an AI model. While there are specific policies and task forces working on AI initiatives in the U.S., the AI regulatory and risk field is significantly underdeveloped. Furthermore, bias is often introduced to AI through existing systemic bias in the U.S., such as unequal representation of minorities in STEM fields or datasets populated by biased policies. While harmful bias can never be completely eradicated from AI, more comprehensive policies regarding AI ethics could be a pivotal step toward U.S. governments utilizing AI for good.
Isley, R. (2022). Algorithmic Bias and Its Implications: How to Maintain Ethics through AI Governance. N.Y.U. American Public Policy Review, 2(1). https://doi.org/10.21428/4b58ebd1.0e834dbb