Levels of Abstraction and Morality

Abstract

In this chapter, I examine the work of Floridi (with Sanders) on the notion of Levels of Abstraction (LoA) and its importance for the morality of artificial agents. I critique their attempt to characterise artificial agents specifically (and systems generally) as moral agents through the use of LoA, threshold functions, and computer systems concepts such as state transitions and interactivity. I do this by first examining their notion of morality and then their notion of agency, particularly contrasting agents versus patients and the agent as system. Essentially, they treat moral agents as systems viewed through a particular LoA; this moral level of abstraction they specify as LoA2. I criticise their use of interactivity, autonomy, and adaptability and note the difficulties that arise. To cash out levels of abstraction, they give several examples, which, I claim, are particularly problematic. I then provide a systematic and comprehensive table of the relationships between interaction, autonomy, and adaptability to suggest where these relationships might be strengthened. Finally, I take issue with the notion of natural LoAs, claiming that no such levels exist. In the end, I conclude that the construction of LoA2 is too artificial and too simple to count as a natural characterisation of morality.

Citation

Lucas, R. (2012). Levels of Abstraction and Morality. In Philosophy of Engineering and Technology (Vol. 8, pp. 43–63). Springer Nature. https://doi.org/10.1007/978-94-007-4292-5_3
