On Measures of Uncertainty in Classification

Abstract

Uncertainty is unavoidable in classification tasks and may originate from the data (e.g., noise or mislabeling) or from the model (e.g., erroneous assumptions). Providing an assessment of the uncertainty associated with each outcome is of paramount importance for evaluating the reliability of classification algorithms, especially on unseen data. In this work, we propose two measures of uncertainty in classification. The first is developed from a geometrical perspective and quantifies a classifier's distance from a random guess. The second is homophily-based: it takes the similarity between classes into account and therefore reflects which classes are likely to be confused. The proposed measures are not aggregated, i.e., they provide an uncertainty assessment for each data point, and they do not require label information. Using several datasets, we demonstrate the proposed measures' differences and their merit in assessing uncertainty in classification. The source code is available at github.com/pioui/uncertainty.
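
The abstract does not give the exact definitions, so the following is only a minimal sketch of the two ideas it describes, under plain assumptions: the geometric measure is taken here as the (rescaled) distance of the predicted class-probability vector from the uniform "random guess" distribution, and the homophily-based measure as an expected pairwise class dissimilarity under the predicted distribution. Function names, array shapes, and the normalisation are hypothetical; the authors' actual formulations are in the paper and in the repository at github.com/pioui/uncertainty.

```python
import numpy as np

def geometry_based_uncertainty(probs):
    """Sketch of a geometry-style measure: how close each predicted
    probability vector is to the uniform (random-guess) distribution.
    Returns values in [0, 1]; 1 = indistinguishable from a random guess,
    0 = a one-hot (fully confident) prediction.

    probs : (n_samples, n_classes) array of predicted probabilities.
    """
    probs = np.asarray(probs, dtype=float)
    k = probs.shape[1]
    uniform = np.full(k, 1.0 / k)
    # Euclidean distance to the uniform vector, normalised by the
    # maximum possible distance (attained by a one-hot vector).
    dist = np.linalg.norm(probs - uniform, axis=1)
    max_dist = np.sqrt((k - 1) / k)
    return 1.0 - dist / max_dist

def homophily_based_uncertainty(probs, similarity):
    """Sketch of a homophily-style measure: uncertainty weighted by how
    dissimilar the plausible classes are, so confusion between similar
    classes contributes less than confusion between dissimilar ones.

    similarity : (n_classes, n_classes) matrix with ones on the diagonal.
    """
    probs = np.asarray(probs, dtype=float)
    dissimilarity = 1.0 - np.asarray(similarity, dtype=float)
    # Expected pairwise class dissimilarity under the predicted distribution.
    return np.einsum('ni,ij,nj->n', probs, dissimilarity, probs)

# Illustrative values only: three samples over four classes.
p = np.array([[0.25, 0.25, 0.25, 0.25],   # random guess -> uncertainty 1
              [1.00, 0.00, 0.00, 0.00],   # confident    -> uncertainty 0
              [0.60, 0.30, 0.05, 0.05]])
print(geometry_based_uncertainty(p))
```

Both functions score each sample individually and use only the predicted probabilities, mirroring the abstract's points that the measures are not aggregated and require no label information.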

Citation (APA)

Chlaily, S., Ratha, D., Lozou, P., & Marinoni, A. (2023). On Measures of Uncertainty in Classification. IEEE Transactions on Signal Processing, 71, 3710–3725. https://doi.org/10.1109/TSP.2023.3322843
