Feature Store for Enhanced Explainability in Support Ticket Classification

Abstract

To maximize trust between human and ML agents in an ML application scenario, humans need to be able to easily understand the reasoning behind predictions made by the black-box models commonly used today. The field of explainable AI aims to maximize this trust. To achieve this, model interpretations need to be informative yet understandable. Often, however, the explanations provided by a model are hard to understand due to complex feature transformations. Our work proposes the use of a feature store to address this issue. We extend the general idea of a feature store: in addition to using it for reading pre-processed features, we also use it to interpret model explanations in a more user-friendly and business-relevant format. This enables both the end-user and data-scientist personas to glean more information from the interpretations in less time. We demonstrate our idea using a service ticket classification scenario; however, the general concept can be extended to other data types and applications as well to gain more insightful explanations.
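The core idea, as described in the abstract, is to keep human-readable metadata alongside each engineered feature in the feature store, then use that metadata to re-express raw model attributions in business terms. The sketch below is a hypothetical illustration of that idea, not the authors' implementation; all class, feature, and field names are invented for the example.

```python
# Hypothetical sketch: a minimal feature store that records, for each
# engineered feature, its source field and a human-readable description,
# then uses that metadata to translate raw explanation scores (e.g. from
# a SHAP-style explainer) into business-friendly terms.

class FeatureStore:
    def __init__(self):
        self._meta = {}

    def register(self, feature, source_field, description):
        """Store provenance and a business-relevant description of a feature."""
        self._meta[feature] = {"source": source_field,
                               "description": description}

    def explain(self, attributions):
        """Map {engineered_feature: importance} to readable (description, score)
        pairs, sorted by absolute importance."""
        ranked = sorted(attributions.items(), key=lambda kv: -abs(kv[1]))
        return [(self._meta.get(f, {"description": f})["description"],
                 round(score, 3))
                for f, score in ranked]


store = FeatureStore()
store.register("tfidf_pwd_reset", "ticket_body",
               "Mentions of password reset in the ticket text")
store.register("req_urgency_ord", "priority_field",
               "Requester-selected urgency level")

# Raw attribution scores as a model explainer might emit them
readable = store.explain({"tfidf_pwd_reset": 0.42, "req_urgency_ord": -0.10})
```

In this sketch the same store that serves pre-processed features at prediction time also serves the explanation layer, so the mapping from engineered features back to business concepts stays in one place.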

Citation (APA)

Mour, V., Dey, S., Jain, S., & Lodhe, R. (2020). Feature Store for Enhanced Explainability in Support Ticket Classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12431 LNAI, pp. 467–478). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-60457-8_38
