Evaluating and Calibrating Uncertainty Prediction in Regression Tasks

Abstract

Predicting not only the target but also an accurate measure of uncertainty is important for many machine learning applications, and in particular, safety-critical ones. In this work, we study the calibration of uncertainty prediction for regression tasks, which often arise in real-world systems. We show that the existing definition for the calibration of regression uncertainty has severe limitations in distinguishing informative from non-informative uncertainty predictions. We propose a new definition that escapes this caveat, together with an evaluation method based on a simple histogram approach. Our method clusters examples with similar uncertainty predictions and compares the predicted uncertainty with the empirical uncertainty on these examples. We also propose a simple, scaling-based calibration method that performs as well as much more complex ones. We show results both on a synthetic, controlled problem and on the object detection bounding-box regression task using the COCO and KITTI datasets.
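The abstract outlines two ingredients: a histogram-based evaluation that bins examples by predicted uncertainty and compares the predicted against the empirical error, and a single-scalar recalibration of the predicted standard deviations. The sketch below is a minimal NumPy illustration of both ideas, not the authors' implementation; the function names, the equal-count binning, and the grid-search fit of the scaling factor are assumptions made for clarity.

```python
import numpy as np

def histogram_calibration_gap(y_true, y_pred, sigma_pred, n_bins=10):
    """Histogram-based calibration check (sketch): sort examples by predicted
    standard deviation, split them into equal-count bins, and compare the
    root mean predicted variance in each bin with the empirical RMSE there."""
    errors = y_true - y_pred
    order = np.argsort(sigma_pred)
    bins = np.array_split(order, n_bins)  # bins of examples with similar predicted uncertainty
    gaps = []
    for idx in bins:
        rmv = np.sqrt(np.mean(sigma_pred[idx] ** 2))   # mean predicted uncertainty in the bin
        rmse = np.sqrt(np.mean(errors[idx] ** 2))      # empirical uncertainty in the bin
        gaps.append(abs(rmv - rmse) / rmv)             # normalized mismatch
    return float(np.mean(gaps))

def fit_std_scaling(y_true, y_pred, sigma_pred):
    """Scaling-based recalibration (sketch): find a single scalar s such that
    s * sigma_pred minimizes the Gaussian negative log-likelihood. A coarse
    grid search is used here purely for readability."""
    errors = y_true - y_pred
    candidates = np.linspace(0.1, 10.0, 1000)

    def nll(s):
        var = (s * sigma_pred) ** 2
        return np.mean(0.5 * np.log(2 * np.pi * var) + errors ** 2 / (2 * var))

    return float(candidates[np.argmin([nll(s) for s in candidates])])
```

In practice the scaling factor would be fit on a held-out validation set and then applied to the uncertainty predictions at test time, after which the histogram-based gap can be recomputed to verify the improvement.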

Citation (APA)
Levi, D., Gispan, L., Giladi, N., & Fetaya, E. (2022). Evaluating and Calibrating Uncertainty Prediction in Regression Tasks. Sensors, 22(15). https://doi.org/10.3390/s22155540
