Poisonous Datasets, Poisonous Trees

  • Grant, T. D.
  • Wischik, D. J.

Abstract

Machine learning gives rise to concerns about “algorithmic bias,” arising from bias in the training dataset. A dataset necessarily reflects a past state of affairs, but we may anticipate that the future will be different, or desire it to be so. Law has struggled with an analogous problem: how to deal with “bad evidence” and prejudice. Holmes’s broad view of experience suggests that once undesirable data has been revealed, its potential for mischief is there, and it is futile to pretend it does not exist. Instead, law has developed several strategies to deal with these negative influences. Under the doctrine of the “fruit of the poisonous tree,” in American jurisprudence, evidence that stems from improper actions by public authorities is inadmissible in court. Or, a judge can give instructions to the jury to guard against improper inferences. Or, a jury verdict may be struck down on appeal. Strategies analogous to these might help to make machine learning a tool for better outcomes, rather than a trap that entrenches past mistakes and prejudice.

Citation (APA)

Grant, T. D., & Wischik, D. J. (2020). Poisonous Datasets, Poisonous Trees. In On the path to AI (pp. 89–101). Springer International Publishing. https://doi.org/10.1007/978-3-030-43582-0_8
