Self-Supervised Learning for Semantic Segmentation of Archaeological Monuments in DTMs

Abstract

Deep learning models require large amounts of labeled data to perform well. In this study, we apply a Self-Supervised Learning (SSL) approach to the semantic segmentation of archaeological monuments in Digital Terrain Models (DTMs). The approach first pretrains a model on unlabeled data (the pretext task) and then fine-tunes it on a small labeled dataset (the downstream task). In the pretext task, we use unlabeled DTMs and Relief Visualizations (RVs) to train an encoder-decoder network and a Generative Adversarial Network (GAN); in the downstream task, we fine-tune a semantic segmentation model on an annotated DTM dataset. Experiments show that this approach outperforms both training from scratch and fine-tuning models pretrained on natural-image datasets such as ImageNet. The code and pretrained weights for the encoder-decoder and GAN models are available on GitHub.
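The pretrain-then-fine-tune workflow described above can be sketched in miniature. The following is an illustrative toy example, not the authors' code: a linear "encoder-decoder" is pretrained to predict a relief-visualization-like target (here, a simple terrain gradient stands in for an RV) from unlabeled "DTM" patches, after which the pretrained encoder is reused with a small head trained on a few labeled examples. All names, shapes, and the gradient-based targets are assumptions for the sake of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 64, 16                      # flattened patch size, latent width

def relief_target(x):
    """Stand-in 'relief visualization': first difference along the patch."""
    return np.diff(x, append=x[:, -1:])

# --- Pretext task: encoder-decoder predicts the relief visualization ---
We = rng.normal(0, 0.1, (D, H))    # encoder weights (to be transferred)
Wd = rng.normal(0, 0.1, (H, D))    # decoder weights (discarded afterwards)
X_unlab = rng.normal(size=(512, D))            # unlabeled "DTM" patches
Y_rv = relief_target(X_unlab)                  # self-supervised targets

mse0 = ((X_unlab @ We @ Wd - Y_rv) ** 2).mean()
lr = 1e-3
for _ in range(300):               # plain gradient descent on MSE
    Z = X_unlab @ We
    err = Z @ Wd - Y_rv
    Wd -= lr * Z.T @ err / len(X_unlab)
    We -= lr * X_unlab.T @ (err @ Wd.T) / len(X_unlab)
mse1 = ((X_unlab @ We @ Wd - Y_rv) ** 2).mean()

# --- Downstream task: reuse the pretrained encoder on a small labeled set ---
X_lab = rng.normal(size=(32, D))
y = (X_lab.mean(axis=1) > 0).astype(float)     # toy binary "monument" labels
w = rng.normal(0, 0.1, H)                      # small task-specific head
for _ in range(300):               # logistic regression on encoder features
    p = 1 / (1 + np.exp(-(X_lab @ We @ w)))
    w -= 1e-2 * (X_lab @ We).T @ (p - y) / len(X_lab)
```

In the paper the encoder-decoder is a deep convolutional network and the downstream model performs dense per-pixel segmentation; the sketch keeps only the structural idea that representations learned from a freely available pretext signal (DTM-to-RV prediction) transfer to the label-scarce task.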

Citation (APA)

Kazimi, B., & Sester, M. (2023). Self-Supervised Learning for Semantic Segmentation of Archaeological Monuments in DTMs. Journal of Computer Applications in Archaeology, 6(1), 155–173. https://doi.org/10.5334/jcaa.110
