There is no strong reason to believe that human-level intelligence represents an upper limit on the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed goals or motivation system. Oracle AIs (OAIs), confined AIs that can only answer questions, are one particular approach to this problem. However, even Oracles are not particularly safe: humans remain vulnerable to traps, social engineering, or simply becoming dependent on the OAI. Still, OAIs are strictly safer than general AIs, and many extra layers of precaution can be added on top of them. This paper looks at some of these precautions and analyses their strengths and weaknesses.
Citation:
Armstrong, S. (2013). Risks and mitigation strategies for oracle AI. In Studies in Applied Philosophy, Epistemology and Rational Ethics (Vol. 5, pp. 335–347). Springer International Publishing. https://doi.org/10.1007/978-3-642-31674-6_25