Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems

Abstract

Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in which they are presumptuous. After elaborating this moral concern, I explore the possibility that carefully procuring the training data for image recognition systems could ensure that the systems avoid the problem. The lesson of this paper extends beyond just the particular case of image recognition systems and the challenge of responsibly identifying a person’s intentions. Reflection on this particular case demonstrates the importance (as well as the difficulty) of evaluating machine learning systems and their training data from the standpoint of moral considerations that are not encompassed by ordinary assessments of predictive accuracy.

Citation (APA)

King, O. C. (2019). Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems. In Philosophical Studies Series (Vol. 134, pp. 265–282). Springer Nature. https://doi.org/10.1007/978-3-030-01800-9_14
