Rational universal benevolence: Simpler, safer, and wiser than "friendly AI"

13 citations · 12 Mendeley readers

Abstract

Insanity is doing the same thing over and over and expecting a different result. "Friendly AI" (FAI) meets this criterion on four separate counts, expecting a good result even though it: 1) not only puts all of humanity's eggs into one basket but relies upon a totally new and untested basket; 2) allows fear to dictate our lives; 3) divides the universe into us vs. them; and 4) rejects the value of diversity. In addition, FAI goal initialization relies on being able to correctly calculate a "Coherent Extrapolated Volition of Humanity" (CEV) via some as-yet-undiscovered algorithm. Rational Universal Benevolence (RUB) is based upon established game theory and evolutionary ethics and is simple, safe, stable, self-correcting, and sensitive to current human thinking, intuitions, and feelings. Which strategy would you prefer to rest the fate of humanity upon? © 2011 Springer-Verlag Berlin Heidelberg.

Citation (APA)

Waser, M. (2011). Rational universal benevolence: Simpler, safer, and wiser than “friendly AI.” In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6830 LNAI, pp. 153–162). https://doi.org/10.1007/978-3-642-22887-2_16
