This chapter surveys responses to the possibility that Artificial General Intelligence (AGI) could pose a catastrophic risk to humanity. The recommendations for dealing with the problem can be divided into proposals for societal action, external constraints, and internal constraints. Proposals for societal action range from ignoring the issue entirely, to enacting regulation, to banning AGI outright. Proposals for external constraints involve ways of constraining and limiting the power of AGIs from the outside. Finally, proposals for internal constraints involve building AGIs in specific ways so as to make them safe. Many proposals suffer from serious problems or appear to be of limited effectiveness, while others seem promising enough to be worth exploring. We conclude by reviewing the proposals we consider worthy of further study. In the short term, these are regulation, merging with machines, AGI confinement, and AGI designs that are easier to control from the outside. In the long term, the most promising proposals are value learning and building AGI systems to be human-like.
CITATION STYLE
Sotala, K., & Yampolskiy, R. (2017). Responses to the Journey to the Singularity. In Frontiers Collection (Vol. Part F976, pp. 25–83). Springer VS. https://doi.org/10.1007/978-3-662-54033-6_3