Discovering the rationale of decisions: Towards a method for aligning learning and reasoning


Abstract

In AI and law, systems that are designed for decision support should be explainable when pursuing justice. In order for these systems to be fair and responsible, they should make correct decisions and make them using a sound and transparent rationale. In this paper, we introduce a knowledge-driven method for model-agnostic rationale evaluation using dedicated test cases, similar to unit-testing in professional software development. We apply this new quantitative human-in-the-loop method in a machine learning experiment aimed at extracting known knowledge structures from artificial datasets from a real-life legal setting. We show that our method allows us to analyze the rationale of black box machine learning systems by assessing which rationale elements are learned or not. Furthermore, we show that the rationale can be adjusted using tailor-made training data based on the results of the rationale evaluation.
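
The rationale-evaluation idea described in the abstract lends itself to a small illustration. The sketch below is a hypothetical minimal example, not the authors' implementation: it assumes an invented two-condition eligibility rule, trains a black-box classifier on generated cases, and then runs a dedicated test set per condition, in the spirit of unit tests, to check whether each rationale element was learned. All names (cond_a, cond_b, rationale_test), the rule, and the thresholds are illustrative assumptions.

```python
# A minimal sketch of model-agnostic rationale evaluation, assuming a
# hypothetical binary eligibility rule with two conditions: a case is
# eligible only if both conditions hold. All names are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def label(X):
    # Ground-truth rationale: eligible iff both conditions are satisfied.
    return ((X[:, 0] > 0.5) & (X[:, 1] > 0.5)).astype(int)

# Train a black-box model on randomly generated cases.
X_train = rng.random((5000, 2))
model = MLPClassifier(hidden_layer_sizes=(24,), max_iter=500, random_state=0)
model.fit(X_train, label(X_train))

def rationale_test(model, varying_col, fixed_value=0.9):
    """Dedicated test set isolating one rationale element: the other
    condition is held satisfied (fixed_value > 0.5), so predictions
    should flip exactly where the varying condition crosses 0.5."""
    X = np.full((1000, 2), fixed_value)
    X[:, varying_col] = rng.random(1000)
    expected = (X[:, varying_col] > 0.5).astype(int)
    return (model.predict(X) == expected).mean()

for col, name in [(0, "cond_a"), (1, "cond_b")]:
    print(f"{name}: rationale accuracy = {rationale_test(model, col):.3f}")
```

In this toy setup, a rationale accuracy near 1.0 for a condition suggests that element was learned, while a low score flags a condition that could be targeted with tailor-made training data, as the abstract describes.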

Citation (APA)

Steging, C., Renooij, S., & Verheij, B. (2021). Discovering the rationale of decisions: Towards a method for aligning learning and reasoning. In Proceedings of the 18th International Conference on Artificial Intelligence and Law, ICAIL 2021 (pp. 235–239). Association for Computing Machinery. https://doi.org/10.1145/3462757.3466059
