Going beyond One-Hot Encoding in Classification: Can Human Uncertainty Improve Model Performance in Earth Observation?

Abstract

Technological and computational advances continuously drive forward the field of deep learning in remote sensing. In recent years, the derivation of quantities describing the uncertainty in a prediction, which naturally accompanies the modeling process, has sparked interest in the remote sensing community. What is often neglected in the machine learning setting, however, is the human uncertainty that influences numerous labeling processes. As the core of this work, the task of local climate zone (LCZ) classification is studied by means of a dataset that contains multiple label votes by domain experts for each image. The inherent label uncertainty describes the ambiguity among the domain experts and is explicitly embedded into the training process via distributional labels. We show that incorporating the label uncertainty helps the model generalize better to the test data and increases model performance. Similar to existing calibration methods, the distributional labels lead to better-calibrated probabilities, which in turn yield more certain and trustworthy predictions. For reproducibility, our code is available at https://github.com/ChrisKo94/LCZ_LDL and https://gitlab.lrz.de/ai4eo/WG_Uncertainty/lcz_ldl.
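
To illustrate the idea of distributional labels, the following minimal sketch converts expert vote counts into soft label vectors and trains against them with a cross-entropy loss instead of one-hot targets. It is a hypothetical PyTorch example: the vote counts, number of classes, and exact loss form are assumptions for illustration only; the authors' actual implementation is in the repositories linked above.

    import torch
    import torch.nn.functional as F

    # Hypothetical setup: 3 images, 17 LCZ classes, each image labeled by 10 experts.
    # votes[i, c] counts how many experts assigned class c to image i.
    num_classes = 17
    votes = torch.zeros(3, num_classes)
    votes[0, 2], votes[0, 5] = 7, 3                   # experts split between two classes
    votes[1, 8] = 10                                  # unanimous vote
    votes[2, 0], votes[2, 1], votes[2, 4] = 4, 4, 2   # three-way ambiguity

    # Distributional (soft) labels: normalize vote counts to probability vectors.
    soft_targets = votes / votes.sum(dim=1, keepdim=True)

    # Placeholder logits; in practice these come from the classification network.
    logits = torch.randn(3, num_classes, requires_grad=True)

    # Cross-entropy against the soft label distribution rather than a one-hot vector.
    log_probs = F.log_softmax(logits, dim=1)
    loss = -(soft_targets * log_probs).sum(dim=1).mean()
    loss.backward()
    print(loss.item())

With one-hot labels, only the single majority class would contribute to the loss; with distributional labels, the ambiguity among the experts is preserved in the training signal.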

Cite

APA

Koller, C., Kauermann, G., & Zhu, X. X. (2024). Going beyond One-Hot Encoding in Classification: Can Human Uncertainty Improve Model Performance in Earth Observation? IEEE Transactions on Geoscience and Remote Sensing, 62, 1–11. https://doi.org/10.1109/TGRS.2023.3336357
