Risks of the Journey to the Singularity


Abstract

Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. Unlike current AI systems, individual AGIs would be capable of learning to operate in a wide variety of domains, including ones they had not been specifically designed for. It has been proposed that AGIs might eventually pose a significant risk to humanity, as they could accumulate substantial power and influence in society while remaining indifferent to human values. The accumulation of power might either happen gradually over time, or it might happen very rapidly (a so-called "hard takeoff"). Gradual accumulation would proceed through normal economic mechanisms, as AGIs came to carry out an increasing share of economic tasks. A hard takeoff could become possible if AGIs needed far less hardware to run than was actually available, if they could redesign themselves to run at ever faster speeds, or if they could repeatedly redesign themselves into more intelligent versions of themselves.

Citation (APA)
Sotala, K., & Yampolskiy, R. (2017). Risks of the journey to the singularity. In The Frontiers Collection (pp. 11–23). Springer. https://doi.org/10.1007/978-3-662-54033-6_2
