A reference process for judging reliability of classification results in predictive analytics


Abstract

Organizations employ data mining to discover patterns in historical data. The models learned from the data allow analysts to make predictions about future events of interest. Different global measures, e.g., accuracy, sensitivity, and specificity, are employed to evaluate a predictive model. Global measures, however, may not suffice to properly assess the reliability of an individual prediction for a specific input case. In this paper, we propose a reference process for the development of predictive analytics applications that allow analysts to better judge the reliability of individual classification results. The proposed reference process is aligned with the CRISP-DM stages and complements each stage with a number of tasks required for reliability checking. We further explain two generic approaches that assist analysts with assessing the reliability of individual predictions, namely perturbation and local quality measures.
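
To make the perturbation idea concrete, the following is a minimal, hypothetical sketch, not code from the paper: it assumes scikit-learn and NumPy, trains an off-the-shelf classifier, and reports the fraction of small random perturbations of an input case under which the predicted class stays unchanged. The function name perturbation_stability and all parameter values are illustrative assumptions.

```python
# Illustrative sketch of perturbation-based reliability checking.
# All names and parameters are hypothetical, not the authors' code.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

def perturbation_stability(model, x, n_perturbations=100,
                           noise_scale=0.05, rng=None):
    """Return the fraction of small random perturbations of input x
    for which the predicted class stays unchanged (1.0 = fully stable)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    base_label = model.predict(x.reshape(1, -1))[0]
    # Add zero-mean Gaussian noise to each feature and re-classify.
    perturbed = x + rng.normal(0.0, noise_scale, size=(n_perturbations, x.size))
    return float(np.mean(model.predict(perturbed) == base_label))

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Reliability of a single, individual classification result:
score = perturbation_stability(model, X[0])
print(f"perturbation stability of this prediction: {score:.2f}")
```

Under the same assumptions, a simple local quality measure could be approximated by the model's predicted class probability for the input, e.g. model.predict_proba(x.reshape(1, -1)).max(); a prediction that is both probable and perturbation-stable deserves more trust than global accuracy alone would suggest.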

Citation (APA)

Staudinger, S., Schuetz, C. G., & Schrefl, M. (2021). A reference process for judging reliability of classification results in predictive analytics. In Proceedings of the 10th International Conference on Data Science, Technology and Applications, DATA 2021 (pp. 124–134). SciTePress. https://doi.org/10.5220/0010620501240134
