Abstract
The new era of image segmentation leveraging the power of Deep Neural Nets (DNNs) comes with a price tag: to train a neural network for pixel-wise segmentation, a large number of training samples must be manually labeled with pixel precision. In this work, we address this with an indirect solution. We build upon advances from the Explainable AI (XAI) community and extract a pixel-wise binary segmentation from the output of Layer-wise Relevance Propagation (LRP), which explains the decision of a classification network. We show that we achieve results comparable to an established U-Net segmentation architecture, while the generation of the training data is significantly simplified. The proposed method can be trained in a weakly supervised fashion, as the training samples need only be labeled at image level, while still producing a segmentation mask as output. This makes it especially applicable to a wide range of real applications where tedious pixel-level labeling is often not feasible.
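The core idea, turning a per-pixel relevance map (e.g. from LRP) into a binary segmentation mask, can be sketched as a simple thresholding step. This is a minimal illustration, not the paper's actual post-processing; the function name, quantile-based threshold, and toy data are all assumptions.

```python
import numpy as np

def relevance_to_mask(relevance, quantile=0.9):
    """Binarize a per-pixel relevance map into a segmentation mask
    by keeping pixels above a chosen relevance quantile.
    (Illustrative sketch; the published method's binarization may differ.)"""
    threshold = np.quantile(relevance, quantile)
    return (relevance >= threshold).astype(np.uint8)

# Toy example: a 4x4 relevance map with a highly relevant 2x2 region,
# as a classifier explanation might highlight the object of interest.
relevance = np.zeros((4, 4))
relevance[1:3, 1:3] = 1.0
mask = relevance_to_mask(relevance, quantile=0.9)
```

In practice the threshold (here a fixed quantile) would be tuned or learned; the paper's contribution lies in how the relevance maps are obtained and refined, not in this trivial binarization step.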
Seibold, C., Künzel, J., Hilsmann, A., & Eisert, P. (2022). From Explanations to Segmentation: Using Explainable AI for Image Segmentation. In Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Vol. 4, pp. 616–626). Science and Technology Publications, Lda. https://doi.org/10.5220/0010893600003124