The Genie Knows, but Doesn't Care

  • RobbBB

Abstract

Summary: If an artificial intelligence is smart enough to be dangerous, we'd intuitively expect it to be smart enough to know how to make itself safe. But that doesn't mean all smart AIs are safe. To turn that capacity into actual safety, we have to program the AI at the outset — before it becomes too fast, powerful, or complicated to reliably control — to already care about making its future self care about safety. That means we have to understand how to code safety. We can't pass the entire buck to the AI, when only an AI we've already safety-proofed will be safe to ask for help on safety issues! Given the five theses, this is an urgent problem if we're likely to figure out how to make a decent artificial programmer before we figure out how to make an excellent artificial ethicist.

Citation (APA)

RobbBB. (2013, September 6). The Genie Knows, but Doesn’t Care. Retrieved from http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/
