Abstract
A Bayesian treatment of deep learning allows for the computation of uncertainties associated with the predictions of deep neural networks. We show how the concept of Errors-in-Variables can be used in Bayesian deep regression to also account for the uncertainty associated with the input of the employed neural network. The presented approach thereby exploits a relevant, but generally overlooked, source of uncertainty and yields a decomposition of the predictive uncertainty into an aleatoric and an epistemic part that is more complete and, in many cases, statistically more consistent. We discuss the approach using various simulated and real examples and observe that the Errors-in-Variables model increases the estimated uncertainty while preserving the prediction performance of models without Errors-in-Variables. For examples with a known regression function, we observe that this ground truth is covered substantially better by the Errors-in-Variables model, indicating that the presented approach leads to more reliable uncertainty estimates.
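As a rough illustration (not taken from the paper itself), the aleatoric/epistemic split mentioned above is commonly obtained from the law of total variance over the posterior of the network parameters. A minimal sketch, assuming a heteroscedastic regression network with mean and variance outputs \(\mu_\theta\) and \(\sigma_\theta^2\) and posterior samples of the parameters \(\theta\) (this notation is introduced here for illustration only and is not the paper's):

\[
\operatorname{Var}[y \mid x] \;=\; \underbrace{\mathbb{E}_{\theta}\!\big[\sigma_\theta^2(x)\big]}_{\text{aleatoric part}} \;+\; \underbrace{\operatorname{Var}_{\theta}\!\big[\mu_\theta(x)\big]}_{\text{epistemic part}}
\]

Roughly speaking, and as the title and abstract suggest, the Errors-in-Variables treatment additionally propagates the uncertainty of the noisy input \(x\) into this decomposition, so that input noise contributes to the aleatoric part; the exact construction is given in the paper.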
Citation
Martin, J., & Elster, C. (2023). Aleatoric Uncertainty for Errors-in-Variables Models in Deep Regression. Neural Processing Letters, 55(4), 4799–4818. https://doi.org/10.1007/s11063-022-11066-3