Remarks on the possibility of ethical reasoning in an artificial intelligence system by means of abductive models


Abstract

Machine learning and other types of AI algorithms are now commonly used to make decisions about important personal situations. Institutions use such algorithms to help determine whether a person should get a job, receive a loan, or even be granted parole, sometimes leaving the decision entirely to an automatic process. Unfortunately, these algorithms can easily become biased and make unjust decisions. To avoid such problems, researchers are working to include an ethical framework in automatic decision systems. A well-known example is MIT’s Moral Machine, which is used to extract the basic ethical intuitions underlying extensive interviews with humans in order to apply them to the design of ethical autonomous vehicles. In this chapter, we want to show the limitations of current statistical methods based on preferences, and defend the use of abductive reasoning as a systematic tool for assigning values to possibilities and generating sets of ethical regulations for autonomous systems.

Citation (APA)

Sans, A., & Casacuberta, D. (2019). Remarks on the possibility of ethical reasoning in an artificial intelligence system by means of abductive models. In Studies in Applied Philosophy, Epistemology and Rational Ethics (Vol. 49, pp. 318–333). Springer International Publishing. https://doi.org/10.1007/978-3-030-32722-4_19
