It has been argued that ethically correct robots should be able to reason about right and wrong. In order to do so, they must have a set of do's and don'ts at their disposal. However, such a list may be inconsistent, incomplete, or otherwise unsatisfactory, depending on the reasoning principles that one employs. For this reason, it might be desirable if robots were to some extent able to reason about their own reasoning; in other words, if they had some meta-ethical capacities. In this paper, we sketch how one might go about designing robots that have such capacities. We show that the field of computational meta-ethics can profit from the same tools that have been used in computational metaphysics.
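The abstract's observation that a list of do's and don'ts may be inconsistent can be illustrated with a minimal sketch (the rules and function below are hypothetical illustrations, not from the paper): represent each norm as an obligation or prohibition on an action, and flag any action that is both obliged and forbidden.

```python
# Minimal sketch: detect directly conflicting norms in a list of
# do's and don'ts. All rule names here are hypothetical examples.
def find_conflicts(norms):
    """norms: list of (action, required) pairs, where required is
    True for an obligation ("do") and False for a prohibition
    ("don't"). Returns the set of directly conflicting actions."""
    obliged = {action for action, required in norms if required}
    forbidden = {action for action, required in norms if not required}
    return obliged & forbidden

rules = [
    ("tell_the_truth", True),    # do tell the truth
    ("harm_humans", False),      # don't harm humans
    ("tell_the_truth", False),   # don't tell the truth: conflicts
]
print(find_conflicts(rules))  # -> {'tell_the_truth'}
```

This only catches the simplest kind of inconsistency (a direct clash on the same action); the paper's point is that richer reasoning principles, such as those studied with automated theorem provers in computational metaphysics, are needed to evaluate a normative system more thoroughly.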
Lokhorst, G.-J. C. (2011). Computational Meta-Ethics. Minds and Machines, 21(2), 261–274. https://doi.org/10.1007/s11023-011-9229-z