imodels: a python package for fitting interpretable models

  • Singh C
  • Nasseri K
  • Tan Y
  • Tang T
  • Yu B

Abstract

imodels is a Python package for concise, transparent, and accurate predictive modeling. It provides users a simple interface for fitting and using state-of-the-art interpretable models, all compatible with scikit-learn (Pedregosa et al., 2011). These models can often replace black-box models while improving interpretability and computational efficiency, all without sacrificing predictive accuracy. In addition, the package provides a framework for developing custom tools and rule-based models for interpretability.

Statement of need

Recent advancements in machine learning have led to increasingly complex predictive models, often at the cost of interpretability. There is often a need for models which are inherently interpretable (Murdoch et al., 2019; Rudin, 2019), particularly in high-stakes applications such as medicine, biology, and political science. In these cases, interpretability can ensure that models behave reasonably, identify when models will make errors, and make the models more trusted by domain experts. Moreover, interpretable models tend to be much more computationally efficient than larger black-box models. Despite the development of many methods for fitting interpretable models (Molnar, 2020), implementations of such models are often difficult to find, use, and compare to one another. imodels aims to fill this gap by providing a simple unified interface and implementation for many state-of-the-art interpretable modeling techniques.

Features

Interpretable models can take various forms. Figure 1 shows four possible forms a model in the imodels package can take. Each form constrains the final model in order to make it interpretable, but there are different methods for fitting each form, which differ in their biases and computational costs. The imodels package contains implementations of many such methods, as well as useful functions for recombining and extending them. Rule sets consist of a set of rules which each act independently; there are different strategies for deriving a rule set, such as Skope-rules (Skope Collaboration, 2021) or RuleFit (Friedman et al., 2008). Rule lists are composed of a set of rules which act in sequence, and include models such as Bayesian rule lists (Letham et al., 2015) or the oneR algorithm (Holte, 1993). Rule trees are similar to rule lists, but allow branching after rules; this includes models such as CART decision trees (Breiman et al., 1984). Algebraic models take the final form of simple algebraic expressions, such as supersparse linear integer models (Ustun & Rudin, 2016).

Figure 1: Examples of different supported model forms. The bottom of each box shows predictions of the corresponding model as a function of X1 and X2.
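To make the unified interface concrete, the sketch below fits one of the package's rule-set models in the usual scikit-learn fit/predict style. It is a minimal illustration under assumptions: the choice of estimator (RuleFitClassifier) and the breast-cancer dataset are examples chosen here, not prescribed by the paper.

```python
# A minimal sketch of the unified scikit-learn-style interface described above.
# The specific estimator (RuleFitClassifier) and dataset are illustrative
# assumptions; they are not prescribed by the paper itself.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

from imodels import RuleFitClassifier  # rule-set model with a fit/predict API

# load a small tabular classification dataset
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# fit the interpretable model exactly as one would fit any sklearn estimator
model = RuleFitClassifier()
model.fit(X_train, y_train)

# evaluate predictions with standard sklearn tooling
preds = model.predict(X_test)
print("test accuracy:", accuracy_score(y_test, preds))

# many imodels estimators summarize their learned rules when printed
print(model)
```

Because every estimator follows the same interface, swapping in a rule list, rule tree, or algebraic model requires changing only the class that is instantiated.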

Cite

Singh, C., Nasseri, K., Tan, Y., Tang, T., & Yu, B. (2021). imodels: a python package for fitting interpretable models. Journal of Open Source Software, 6(61), 3192. https://doi.org/10.21105/joss.03192
