Model Explanation and Interpretation Concepts for Stimulating Advanced Human-Machine Interaction with “Expert-in-the-Loop”

  • Lughofer, E.

Abstract

We propose two directions for stimulating advanced human-machine interaction in machine learning systems. The first direction acts on a local level by suggesting a reasoning process that explains why certain model decisions/predictions have been made for current sample queries. This may help humans to better understand how the model behaves and support them in providing more consistent and more certain feedback. A practical example from the visual inspection of production items demonstrates improved human labeling consistency. The second direction acts on a global level by addressing several criteria that are necessary for good interpretability of the whole model. By meeting these criteria, the likelihood increases of (1) gaining better-founded insights into the behavior of the system and (2) stimulating advanced expert/operator feedback in the form of active manipulations of the model structure. Possibilities for how best to integrate different types of advanced feedback, in combination with (on-line) data, using incremental model updates are discussed. This leads to a new hybrid interactive model-building paradigm that weighs subjective knowledge against objective data and thus integrates the "expert-in-the-loop" aspect.
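
To make the hybrid paradigm concrete, the following is a minimal Python sketch, not the chapter's actual method: all names (HybridIncrementalModel, update_from_expert, etc.) are illustrative assumptions. It shows an online classifier that is updated incrementally from streaming data (objective), offers a simple per-sample explanation of its decisions (the local direction), and exposes a hook through which an expert can directly manipulate the model structure (subjective feedback).

    import numpy as np

    class HybridIncrementalModel:
        """Online linear classifier updated sample-by-sample (perceptron-style),
        with a hook for direct expert manipulation of its structure."""

        def __init__(self, n_features, lr=0.1):
            self.w = np.zeros(n_features)
            self.b = 0.0
            self.lr = lr

        def predict(self, x):
            return 1 if x @ self.w + self.b > 0 else 0

        def explain(self, x):
            # Local explanation: per-feature contribution to the decision,
            # supporting a reasoning process for a single query sample.
            return dict(enumerate(self.w * x))

        def update_from_data(self, x, y):
            # Objective feedback: incremental update from one labeled sample.
            err = y - self.predict(x)
            self.w += self.lr * err * x
            self.b += self.lr * err

        def update_from_expert(self, feature_idx, new_weight):
            # Subjective feedback: the expert directly manipulates the model
            # structure, e.g. correcting an implausible feature influence.
            self.w[feature_idx] = new_weight

    # Usage: interleave data-driven updates with an expert correction.
    rng = np.random.default_rng(0)
    model = HybridIncrementalModel(n_features=3)
    for _ in range(100):
        x = rng.normal(size=3)
        y = int(x[0] + 0.5 * x[1] > 0)   # hidden ground truth
        model.update_from_data(x, y)

    print("learned weights:", model.w)
    print("explanation for a query:", model.explain(np.array([1.0, -0.5, 2.0])))
    model.update_from_expert(2, 0.0)     # expert knows feature 2 is irrelevant

In the chapter's setting the incremental learner would typically be a richer, rule-based (e.g., evolving fuzzy) model whose structure an expert can inspect and edit; the sketch only illustrates the interaction pattern of alternating data-driven and expert-driven updates.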

Cite

CITATION STYLE

APA

Lughofer, E. (2018). Model Explanation and Interpretation Concepts for Stimulating Advanced Human-Machine Interaction with “Expert-in-the-Loop” (pp. 177–221). https://doi.org/10.1007/978-3-319-90403-0_10
