TSInterpret: A Python Package for the Interpretability of Time Series Classification

  • Höllig, J.
  • Kulbach, C.
  • Thoma, S.

Abstract

TSInterpret is a Python package that enables post-hoc interpretability and explanation of black-box time series classifiers in three lines of code. Due to the specific structure of time series (i.e., non-independent features (Ismail et al., 2020)) and unintuitive visualizations (Siddiqui et al., 2019), traditional interpretability and explainability libraries like Captum (Kokhlikyan et al., 2020), Alibi Explain (Klaise et al., 2021), or tf-explain (Meudec, 2021) find limited usage. TSInterpret specifically addresses black-box time series classification by providing a unified interface to state-of-the-art interpretation algorithms together with default plots. In addition, the package provides a framework for developing further easy-to-use interpretability methods.

Statement of need

Temporal data is ubiquitous and encountered in many real-world applications, ranging from electronic health records (Rajkomar et al., 2018) to cyber security (Susto et al., 2018). Although deep learning methods have been successful in Computer Vision (CV) and Natural Language Processing (NLP) for almost a decade, they have been applied to time series data only in the past few years (e.g., Fawaz et al., 2019; Rajkomar et al., 2018; Ruiz et al., 2021; Susto et al., 2018). Deep learning models have achieved state-of-the-art results on time series classification (e.g., Fawaz et al., 2019). However, these models are black boxes due to their complexity, which limits their application in high-stakes scenarios (e.g., medicine or autonomous driving), where user trust and understandability of the decision process are crucial. In such scenarios, post-hoc interpretability is useful because it enables the analysis of already trained models without model modification. Much work has been done on post-hoc interpretability in CV and NLP, but most of the developed approaches are not directly applicable to time series data: the time component impedes the usage of existing methods (Ismail et al., 2020). Thus, increasing effort is put into adapting existing methods to time series (e.g., LEFTIST, based on SHAP/LIME (Guillemé et al., 2019); Temporal Saliency Rescaling for saliency methods (Ismail et al., 2020); or counterfactual approaches (Ates et al., 2021; Delaney et al., 2021; Höllig et al., 2022)). Further, compared to images or textual data, humans cannot intuitively and instinctively understand the information carried in time series. Both uni- and multivariate time series are therefore unintuitive in nature and lack understanding at first sight (Siddiqui et al., 2019). Hence, providing suitable visualizations of time series interpretability becomes crucial.

Features

Explanations can take various forms (see Figure 1). Different use cases or users need different types of explanations: while counterfactuals are useful for a domain expert, a data scientist or machine learning engineer may prefer gradient-based approaches (Ismail et al., 2020) to evaluate the model's feature attribution.

Figure 1: Explanations.

Counterfactual approaches calculate counterexamples by finding a time series close to the original one that is classified differently, thereby revealing decision boundaries. The intuition is to answer the question "What if?". TSInterpret implements Ates et al. (2021), a perturbation-based approach for multivariate data; Delaney et al. (2021), for univariate time series; and Höllig et al. (2022), an evolutionary approach applicable to uni- and multivariate data. Gradient-based approaches (e.g., GradCAM) were adapted to time series by Ismail et al. (2020), who proposed rescaling saliency maps according to time-step importance and feature importance; the rescaling is applicable to both gradient- and perturbation-based methods, and the implementation builds on tf-explain (Meudec, 2021) and Captum (Kokhlikyan et al., 2020). LEFTIST by Guillemé et al. (2019) calculates feature importance with a LIME variant that uses shapelets as interpretable components.
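As a sketch of how the unified interface is meant to be used, consider the counterfactual method of Ates et al. (2021). The class name COMTECF and the instantiate/explain/plot pattern follow the TSInterpret documentation, but the exact import path, constructor arguments, and data shapes below are assumptions to verify against the installed package:

```python
import numpy as np
# Import path assumed from the TSInterpret docs; verify against your version.
from TSInterpret.InterpretabilityModels.counterfactual.COMTECF import COMTECF

# Placeholders, not defined here: `model` is a trained PyTorch classifier;
# `train_x` and `test_x` are arrays of shape (n_samples, n_features,
# n_timesteps); `train_y` and `test_y` are the label vectors.
item = test_x[0:1]                      # instance to explain, batch dim kept
orig_label = int(test_y[0])

# The advertised three lines: instantiate, explain, plot.
exp_model = COMTECF(model, (train_x, train_y), backend='PYT', mode='feat')
cf, cf_label = exp_model.explain(item)
exp_model.plot(item, orig_label, cf, cf_label)  # plot signature assumed
```

The same instantiate/explain/plot pattern is what the unified interface standardizes across the implemented methods.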

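To make the rescaling idea concrete, below is a from-scratch sketch of Temporal Saliency Rescaling in the spirit of Ismail et al. (2020); it is not TSInterpret's implementation, and the zero baseline, gradient-based base saliency, and exhaustive per-feature loop (the paper restricts this to important time steps for efficiency) are simplifying assumptions:

```python
import numpy as np

def tsr(grad_fn, x, baseline=0.0):
    """Minimal Temporal Saliency Rescaling sketch (after Ismail et al., 2020).

    grad_fn: returns the gradient of the class score w.r.t. an input of
             shape (n_features, n_timesteps); the base saliency map is
             its absolute value.
    x:       one multivariate series, shape (n_features, n_timesteps).
    """
    base_map = np.abs(grad_fn(x))
    n_feat, n_time = x.shape
    # Step 1: time-step relevance = total change in the saliency map when
    # an entire time step is replaced by the baseline value.
    time_rel = np.zeros(n_time)
    for t in range(n_time):
        x_m = x.copy()
        x_m[:, t] = baseline
        time_rel[t] = np.abs(base_map - np.abs(grad_fn(x_m))).sum()
    # Step 2: feature relevance, same masking idea applied per (feature, t).
    feat_rel = np.zeros((n_feat, n_time))
    for t in range(n_time):
        for i in range(n_feat):
            x_m = x.copy()
            x_m[i, t] = baseline
            feat_rel[i, t] = np.abs(base_map - np.abs(grad_fn(x_m))).sum()
    # Final attribution: feature relevance rescaled by time-step relevance.
    return time_rel[None, :] * feat_rel

# Toy usage: score(x) = sum(tanh(W * x)), whose gradient depends on x.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 50))
x = rng.normal(size=(3, 50))
attr = tsr(lambda z: W * (1.0 - np.tanh(W * z) ** 2), x)
print(attr.shape)  # (3, 50)
```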