ALEX: Active Learning based Enhancement of a Classification Model's EXplainability


Abstract

An active learning (AL) algorithm seeks to construct an effective classifier with a minimal number of labeled examples in a bootstrapping manner. While standard AL heuristics exist, such as selecting for annotation those points for which a classification model yields the least confident predictions, there has been no empirical investigation of whether these heuristics lead to models that are more interpretable to humans. In the era of data-driven learning, this is an important research direction to pursue. This paper describes our work-in-progress towards developing an AL selection function that, in addition to model effectiveness, also seeks to improve the interpretability of a model during the bootstrapping steps. Concretely speaking, our proposed selection function trains an 'explainer' model in addition to the classifier model, and favours those instances where a different part of the data is used, on average, to explain the predicted class. Initial experiments show encouraging trends, suggesting that such a heuristic can lead to more effective and more explainable end-to-end data-driven classifiers.
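To illustrate the kind of selection function the abstract describes, the following is a minimal, hypothetical sketch of one AL acquisition step. It is not the authors' implementation: the "explainer" is stood in for by a simple linear attribution (coefficient times feature value), the toy data, seed-set construction, and the 0.5 mixing weight between uncertainty and explanation novelty are all arbitrary assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy pool: 200 points, 5 features, two Gaussian classes (hypothetical data).
X = np.vstack([rng.normal(-1, 1, (100, 5)), rng.normal(1, 1, (100, 5))])
y = np.array([0] * 100 + [1] * 100)

# Small labeled seed set with both classes represented (bootstrapping start).
labeled = list(rng.choice(100, 5, replace=False)) + \
          list(rng.choice(np.arange(100, 200), 5, replace=False))
pool = [i for i in range(200) if i not in labeled]

clf = LogisticRegression().fit(X[labeled], y[labeled])

# Standard AL heuristic: uncertainty = 1 - max predicted class probability.
probs = clf.predict_proba(X[pool])
uncertainty = 1.0 - probs.max(axis=1)

# Simple per-feature attribution w * x, a crude stand-in for the paper's
# separately trained explainer model.
attr = X[pool] * clf.coef_[0]

# "Explanation novelty": distance of an instance's attribution vector from
# the pool average, i.e. a different part of the data explains its class.
novelty = np.linalg.norm(attr - attr.mean(axis=0), axis=1)

# Combined acquisition score; the 0.5/0.5 weighting is an assumption.
score = 0.5 * uncertainty + 0.5 * (novelty / novelty.max())

# Query the top-5 pool instances for annotation.
batch = [pool[i] for i in np.argsort(score)[::-1][:5]]
print(batch)
```

In a full AL loop this step would repeat: the queried batch is labeled, moved into the training set, and both the classifier and the explainer are retrained before the next selection round.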

Citation (APA)

Mondal, I., & Ganguly, D. (2020). ALEX: Active Learning based Enhancement of a Classification Model’s EXplainability. In International Conference on Information and Knowledge Management, Proceedings (pp. 3309–3312). Association for Computing Machinery. https://doi.org/10.1145/3340531.3417456
