The Artificial Moral Advisor. The “Ideal Observer” Meets Artificial Intelligence

Abstract

We describe a form of moral artificial intelligence that could be used to improve human moral decision-making. We call it the “artificial moral advisor” (AMA). The AMA would implement a quasi-relativistic version of the “ideal observer” famously described by Roderick Firth. We describe similarities and differences between the AMA and Firth’s ideal observer. Like Firth’s ideal observer, the AMA is disinterested, dispassionate, and consistent in its judgments. Unlike Firth’s observer, the AMA is non-absolutist, because it would take into account the human agent’s own principles and values. We argue that the AMA would respect and indeed enhance individuals’ moral autonomy, help individuals achieve a wide and a narrow reflective equilibrium, make up for the limitations of human moral psychology in a way that takes conservatives’ objections to human bioenhancement seriously, and implement the positive functions of intuitions and emotions in human morality without their downsides, such as biases and prejudices.

Citation (APA)

Giubilini, A., & Savulescu, J. (2018). The Artificial Moral Advisor. The “Ideal Observer” Meets Artificial Intelligence. Philosophy and Technology, 31(2), 169–188. https://doi.org/10.1007/s13347-017-0285-z
