Machine intentionality, the moral status of machines, and the composition problem

Abstract

According to the most popular theories of intentionality, a family of theories we will refer to as “functional intentionality,” a machine can have genuine intentional states so long as it has functionally characterizable mental states that are causally hooked up to the world in the right way. This paper considers a detailed description of a robot that seems to meet the conditions of functional intentionality, but which falls victim to what I call “the composition problem.” One obvious way to escape the problem (arguably, the only way) is if the robot can be shown to be a moral patient – to deserve a particular moral status. If so, it isn’t clear how functional intentionality could remain plausible (something like “phenomenal intentionality” would be required). Finally, while it would have seemed that a reasonable strategy for establishing the moral status of intelligent machines would be to demonstrate that the machine possessed genuine intentionality, the composition argument suggests that the order of precedence is reversed: The machine must first be shown to possess a particular moral status before it is a candidate for having genuine intentionality.

Citation (APA)

Anderson, D. L. (2013). Machine intentionality, the moral status of machines, and the composition problem. In Studies in Applied Philosophy, Epistemology and Rational Ethics (Vol. 5, pp. 321–333). Springer International Publishing. https://doi.org/10.1007/978-3-642-31674-6_24
