Trusting My Predictions: On the Value of Instance-Level Analysis


Abstract

Machine Learning solutions have spread across many domains, including critical applications. The development of such models usually relies on a dataset of labeled data, which is split into training and test sets so that the accuracy of the models in replicating the test labels can be assessed. This process is often iterated in a cross-validation procedure to obtain average performance estimates. But is the average predictive performance on test sets enough to assess the trustworthiness of a Machine Learning model? This paper discusses the importance of knowing which individual observations of a dataset are more challenging than others, and how this characteristic can be measured and used to improve classification performance and trustworthiness. A set of strategies for measuring the hardness level of the instances of a dataset is surveyed, and a Python package containing their implementation is provided.
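One well-known strategy of the kind surveyed here is k-Disagreeing Neighbors (kDN), which scores an instance by the fraction of its nearest neighbors that carry a different class label: instances deep inside their class region score near 0 (easy), while instances near class boundaries or with noisy labels score near 1 (hard). The sketch below is a minimal plain-NumPy illustration of that idea on synthetic data; it is not the API of the package accompanying the paper.

```python
import numpy as np

def kdn_hardness(X, y, k=5):
    """k-Disagreeing Neighbors (kDN): for each instance, the fraction
    of its k nearest neighbors (Euclidean distance) whose class label
    differs from its own. Scores range from 0 (easy) to 1 (hard)."""
    # pairwise squared Euclidean distances between all instances
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d, np.inf)          # an instance is not its own neighbor
    idx = np.argsort(d, axis=1)[:, :k]   # indices of the k nearest neighbors
    # compare each neighbor's label against the instance's own label
    return (y[idx] != y[:, None]).mean(axis=1)

rng = np.random.default_rng(0)
# two Gaussian blobs: interior points are easy, boundary points harder
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.repeat([0, 1], 50)
h = kdn_hardness(X, y, k=5)
print(h.shape)   # (100,)
```

Per-instance scores like these are what make the analysis actionable: they can flag likely label noise, weight or filter training examples, or qualify which individual predictions of a trained model deserve less trust.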


CITATION STYLE

APA

Lorena, A. C., Paiva, P. Y. A., & Prudêncio, R. B. C. (2024). Trusting My Predictions: On the Value of Instance-Level Analysis. ACM Computing Surveys, 56(7). https://doi.org/10.1145/3615354
