An Analysis of Barriers to Embedding Moral Principles in a Robot


Abstract

One of the difficulties in embedding moral principles in a robot is the set of barriers posed by moral principles themselves. These barriers are analysed under four specific topics. The first concerns the difference between value judgments and fact judgments. The subsequent analyses focus on the specific moral barriers arising, respectively, from deontology, consequentialism, and virtue ethics. These analyses show that making a moral robot is much harder than we thought. If we set aside reductionism, the method commonly used in moral philosophy, we may find a new direction for solving the problem. It is highly possible that, in trying to make moral machines, human beings will find a better way to improve their own moral cultivation.

APA

Yan, P. (2019). An Analysis of Barriers to Embedding Moral Principles in a Robot. In Advances in Intelligent Systems and Computing (Vol. 877, pp. 204–212). Springer Verlag. https://doi.org/10.1007/978-3-030-02116-0_24
