Measuring interpretability for different types of machine learning models

7 citations · 25 readers (Mendeley)

Abstract

The interpretability of a machine learning model plays a significant role in practical applications, so a method is needed to compare the interpretability of different models and select the most appropriate one. However, model interpretability is a highly subjective concept that is difficult to measure accurately, let alone to compare across models. To this end, we develop an interpretability evaluation model that computes and compares the interpretability of different models. Specifically, we first present a general form of model interpretability. Second, a questionnaire survey system is developed to collect information about users' understanding of a machine learning model. Next, three structural features are selected to investigate the relationship between interpretability and structural complexity. After this, an interpretability label is built from the questionnaire survey results, and a linear regression model is developed to evaluate the relationship between the structural features and model interpretability. The experimental results demonstrate that our interpretability evaluation model is valid and reliable for evaluating the interpretability of different models.
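The core of the pipeline described in the abstract (survey-derived interpretability labels regressed on structural features) can be illustrated with a minimal sketch. The specific features, values, and library choice below are assumptions for illustration, not taken from the paper; the authors' actual features and aggregation of questionnaire responses may differ.

```python
# Minimal sketch (hypothetical data): fit a linear regression that maps
# structural-complexity features of a model to an interpretability score
# aggregated from questionnaire responses.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical structural features per model, e.g.
# (number of parameters, model depth, number of rules/nodes).
X = np.array([
    [3,   1,   5],   # e.g. a shallow decision tree
    [8,   2,  20],
    [50,  4, 200],   # e.g. a deeper ensemble
    [120, 6, 500],
])

# Illustrative interpretability labels in [0, 1], aggregated from survey answers.
y = np.array([0.90, 0.75, 0.40, 0.20])

reg = LinearRegression().fit(X, y)
print("coefficients:", reg.coef_)
print("intercept:", reg.intercept_)

# Predict an interpretability score for a new model's structural features.
print("predicted interpretability:", reg.predict([[10, 2, 30]]))
```

In this setup, the fitted coefficients indicate how each structural feature contributes to perceived interpretability, which is the kind of relationship the paper's evaluation model is meant to quantify.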

Citation (APA)

Zhou, Q., Liao, F., Mou, C., & Wang, P. (2018). Measuring interpretability for different types of machine learning models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11154 LNAI, pp. 295–308). Springer Verlag. https://doi.org/10.1007/978-3-030-04503-6_29
