ModelWise: Interactive Model Comparison for Model Diagnosis, Improvement and Selection

Abstract

Model comparison is an important process that facilitates model diagnosis, improvement, and selection when multiple models are developed for a classification task. It involves careful comparison of model performance and interpretation. Current visual analytics solutions often ignore the feature selection process: they either do not support detailed analysis of multiple multi-class classifiers or rely on feature analysis alone to interpret model results. Understanding how different models make classification decisions, especially why they disagree on the same instances, requires deeper insight into model behavior. We present ModelWise, a visual analytics method for comparing multiple multi-class classifiers in terms of model performance, feature space, and model explanation. ModelWise adapts visualizations with rich interactions to support multiple workflows for model diagnosis, improvement, and selection. It takes into account the feature subspaces generated for use in different models and improves model understanding through model explanation. We demonstrate the usability of ModelWise with two case studies: one with a small exemplar dataset, and another developed together with a machine learning expert using real-world perioperative data.
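The abstract highlights instance-level disagreement between classifiers as a central target of comparison. As a rough illustration of that idea only (this is not the ModelWise implementation, and the dataset and model choices below are arbitrary assumptions), the following sketch trains a few scikit-learn classifiers on the same task and lists the test instances on which their predictions differ:

```python
# Hypothetical sketch: surfacing per-instance prediction disagreements across
# multiple multi-class classifiers, the kind of analysis described in the abstract.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train several classifiers on the same classification task (choices are arbitrary).
models = {
    "logreg": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(random_state=0),
    "svm": SVC(),
}
preds = {name: m.fit(X_train, y_train).predict(X_test) for name, m in models.items()}

# Per-instance disagreement: test instances where not all models predict the same class.
pred_matrix = np.column_stack(list(preds.values()))           # shape: (n_test, n_models)
disagree = np.any(pred_matrix != pred_matrix[:, [0]], axis=1)

print(f"{disagree.sum()} of {len(y_test)} test instances have disagreeing predictions")
for i in np.flatnonzero(disagree)[:5]:
    votes = {name: int(p[i]) for name, p in preds.items()}
    print(f"instance {i}: true class = {y_test[i]}, predictions = {votes}")
```

Such a disagreement set is only a starting point; the paper's contribution lies in the visualizations and interactions built around model performance, feature subspaces, and model explanation.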

Cite

APA

Meng, L., van den Elzen, S., & Vilanova, A. (2022). ModelWise: Interactive Model Comparison for Model Diagnosis, Improvement and Selection. Computer Graphics Forum, 41(3), 97–108. https://doi.org/10.1111/cgf.14525
