Responses to the Journey to the Singularity


Abstract

This chapter surveys various responses that have been made to the possibility of Artificial General Intelligence (AGI) posing a catastrophic risk to humanity. The recommendations given for dealing with the problem can be divided into proposals for societal action, external constraints, and internal constraints. Proposals for societal action range from ignoring the issue entirely to enacting regulation to banning AGI outright. Proposals for external constraints involve different ways of constraining and limiting the power of AGIs from the outside. Finally, proposals for internal constraints involve building AGIs in specific ways so as to make them safe. Many proposals seem to suffer from serious problems or to be of limited effectiveness, while others seem promising enough to be worth exploring. We conclude by reviewing the proposals which we feel are worthy of further study. In the short term, these are regulation, merging with machines, AGI confinement, and AGI designs that are easier to control from the outside. In the long term, the most promising proposals are value learning and building AGI systems to be human-like.

Citation (APA)

Sotala, K., & Yampolskiy, R. (2017). Responses to the Journey to the Singularity. In Frontiers Collection (Vol. Part F976, pp. 25–83). Springer VS. https://doi.org/10.1007/978-3-662-54033-6_3
