This chapter introduces the term technological singularity and analyses the varying and ambiguous ways in which it is used. It examines the difficulty of predicting what would happen with “human-comparable” artificial intelligences (AIs), and what can nevertheless be said about such an occurrence. The track record of expert predictions in AI is poor, for solid theoretical reasons backed up by empirical evidence. However, there are strong arguments implying that such an AI could become extremely powerful, through one or another of various plausible routes, not necessarily requiring the AI to be “superintelligent”. The chapter then demonstrates that such an AI has a non-negligible chance of being dangerous for humanity as a whole. The difficulty of reasoning about this subject and the uncertainty surrounding it cannot be taken as excuses to do nothing; indeed, the position that AI would be safe is one of great overconfidence, far beyond what the evidence can warrant.
CITATION STYLE
Armstrong, S. (2017). Introduction to the Technological Singularity. In The Frontiers Collection (pp. 1–8). Springer. https://doi.org/10.1007/978-3-662-54033-6_1