Designing ethical AI in the shadow of Hume’s guillotine


Abstract

Artificially intelligent systems can collect knowledge regarding epistemic information, but can they be used to derive new values? Epistemic information concerns facts, including how things are in the world, whereas ethical values concern how actions should be taken. The operation of artificial intelligence (AI) is based on facts, but it also requires values. A critical question here concerns Hume's Guillotine, which claims that one cannot derive values from facts. Hume's Guillotine appears to divide AI systems into two ethical categories: weak and strong. Ethically weak AI systems can be applied only within given value rules, whereas ethically strong AI systems may be able to generate new values from facts. If Hume is correct, ethically strong AI systems are impossible, but there are, of course, no obstacles to designing ethically weak AI systems.

Citation (APA)

Saariluoma, P., & Leikas, J. (2020). Designing ethical AI in the shadow of Hume’s guillotine. In Advances in Intelligent Systems and Computing (Vol. 1131 AISC, pp. 594–599). Springer. https://doi.org/10.1007/978-3-030-39512-4_92
