Utilize Spatial Prior in Ground Truth: Spatial-Enhanced Loss for Semantic Segmentation

Abstract

Most supervised semantic segmentation methods to date adopt cross-entropy (CE) loss as the default choice. Standard CE treats all pixels in an image indiscriminately, without accounting for contextual differences between pixels, so the model is overwhelmed by the numerous homogeneous pixels inside large objects. It also ignores an essential spatial prior that can be deduced from the ground truth, namely the segmentation edges, which are effective for distinguishing these excessive homogeneous pixels. We therefore propose a novel loss function, termed Spatial-enhanced Loss (SL), in which the image is spatially separated into an edge region and a body region with the help of edges derived from the ground truth. Experiments show that SL clearly outperforms Focal Loss, standard cross-entropy loss, class-balanced cross-entropy loss, and Dice Loss. Without any additional tricks, we achieve substantial improvements on multiple models, up to 1.60% mIoU.
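To make the core idea concrete, the following is a minimal PyTorch sketch of how a loss of this kind could be built: an edge mask is derived from the ground-truth label map, and per-pixel cross-entropy is weighted differently in the edge and body regions. This is not the authors' exact formulation; the edge-extraction method (max-pool dilation/erosion), the neighborhood radius, and the edge_weight parameter are all illustrative assumptions.

import torch
import torch.nn.functional as F


def edge_mask_from_gt(gt: torch.Tensor, radius: int = 2) -> torch.Tensor:
    """Return a {0,1} mask marking pixels within `radius` of a label boundary.

    gt: (N, H, W) integer label map (ground truth).
    """
    gt_f = gt.float().unsqueeze(1)                        # (N, 1, H, W)
    k = 2 * radius + 1
    # Grayscale dilation and erosion of the label map; they differ only
    # near a boundary between two different labels.
    dilated = F.max_pool2d(gt_f, k, stride=1, padding=radius)
    eroded = -F.max_pool2d(-gt_f, k, stride=1, padding=radius)
    return (dilated != eroded).squeeze(1).float()


def spatial_enhanced_ce(logits: torch.Tensor, gt: torch.Tensor,
                        edge_weight: float = 2.0, radius: int = 2) -> torch.Tensor:
    """Per-pixel CE with edge-region pixels up-weighted relative to body pixels."""
    ce = F.cross_entropy(logits, gt, reduction="none")    # (N, H, W)
    edge = edge_mask_from_gt(gt, radius)
    weights = 1.0 + (edge_weight - 1.0) * edge            # body = 1, edge = edge_weight
    return (weights * ce).sum() / weights.sum()

In practice such a loss is a drop-in replacement for standard CE during training, e.g. loss = spatial_enhanced_ce(model(images), labels); only the edge radius and the edge/body weighting need to be tuned.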

Cite (APA)

Zhang, Y., Liu, F., & Tang, Q. (2022). Utilize Spatial Prior in Ground Truth: Spatial-Enhanced Loss for Semantic Segmentation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13531 LNCS, pp. 312–321). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-15934-3_26
