Governance, risk, and artificial intelligence


Abstract

Artificial intelligence, whether embodied in robots or Internet of Things devices, or disembodied in intelligent agents or decision-support systems, can enrich the human experience. It will also fail and cause harms, including physical injury and financial loss, as well as subtler harms such as instantiating human bias or undermining individual dignity. These failures could have a disproportionate impact because strange, new, and unpredictable dangers may lead to public discomfort and rejection of artificial intelligence. Two possible approaches to mitigating these risks are the hard power of regulating artificial intelligence, to ensure it is safe, and the soft power of risk communication, which engages the public and builds trust. These approaches are complementary, and both should be implemented as artificial intelligence becomes increasingly prevalent in daily life.

Citation (APA)
Mannes, A. (2020). Governance, risk, and artificial intelligence. AI Magazine, 41(1), 61–69. https://doi.org/10.1609/aimag.v41i1.5200
