Risks and mitigation strategies for oracle AI


Abstract

There is no strong reason to believe that human-level intelligence represents an upper limit on the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed goals or motivation systems. Oracle AIs (OAIs), confined AIs that can only answer questions, are one particular approach to this problem. However, even Oracles are not particularly safe: humans are still vulnerable to traps, social engineering, or simply becoming dependent on the OAI. But OAIs are still strictly safer than general AIs, and there are many extra layers of precautions we can add on top of them. This paper looks at some of these and analyses their strengths and weaknesses.

Citation (APA)

Armstrong, S. (2013). Risks and mitigation strategies for oracle AI. In Studies in Applied Philosophy, Epistemology and Rational Ethics (Vol. 5, pp. 335–347). Springer International Publishing. https://doi.org/10.1007/978-3-642-31674-6_25
