Unsupervised change detection based on image reconstruction loss with segment anything

Abstract

In remote sensing, change detection based on deep learning shows promising performance. However, collecting multi-temporal paired images for training a change detection model is costly. To address this problem, unsupervised change detection methods have been proposed, but their performance remains low. In this article, we introduce Unsupervised Change Detection Based on Image Reconstruction Loss (CDRL) and CDRL with Segment Anything (CDRL-SA). Requiring only single-temporal unlabelled images for training, CDRL treats an original image and a photometrically transformed version of it as an unchanged pair and feeds this pair to a bi-temporal network trained to reconstruct the original image. During inference, genuinely changed pairs incur a large reconstruction loss, which highlights the change areas. To capture finer details, we change the structure of CDRL to a transformer-based model and introduce the CutSwap method for effective training. The resulting output is then fused with the output of the recently proposed Segment Anything (SA) model to refine the final prediction. We evaluated CDRL and CDRL-SA on the LEVIR change detection dataset and the CLCD dataset, achieving competitive accuracy (ACC) scores of 88.9 and 91.6, respectively, demonstrating the effectiveness of the approach for unsupervised change detection.
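The following is a minimal conceptual sketch of the training and inference idea described in the abstract, not the authors' implementation. It assumes PyTorch; the toy network, the photometric transform, and the error threshold are illustrative placeholders standing in for the paper's transformer-based architecture and tuned settings.

```python
# Conceptual sketch of the CDRL idea: train a reconstruction network on
# pseudo-unchanged pairs (an image and its photometric variant), then flag
# change at inference where reconstruction error is high.
import torch
import torch.nn as nn

class ReconstructionNet(nn.Module):
    """Toy bi-temporal reconstruction network (placeholder architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, t1, t2):
        # Concatenate the pair along the channel axis and reconstruct t1
        return self.net(torch.cat([t1, t2], dim=1))

def photometric_transform(img):
    # Illustrative brightness/contrast jitter used to build an "unchanged" pair
    return (img * (0.8 + 0.4 * torch.rand(1)) + 0.1 * torch.randn_like(img)).clamp(0, 1)

model = ReconstructionNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Training step: a single-temporal patch and its photometric variant act as
# an unchanged pair; the network learns to recover the original image.
img = torch.rand(1, 3, 64, 64)             # stand-in for a single-temporal patch
opt.zero_grad()
recon = model(img, photometric_transform(img))
loss = loss_fn(recon, img)
loss.backward()
opt.step()

# Inference: a genuinely changed pair reconstructs poorly, so high per-pixel
# reconstruction error marks candidate change regions.
with torch.no_grad():
    other_time = torch.rand(1, 3, 64, 64)   # stand-in for the second-date image
    err = (model(img, other_time) - img).abs().mean(dim=1)
    change_mask = err > 0.2                 # threshold is an illustrative assumption
```

In the paper this per-pixel error map is further combined with segments produced by the Segment Anything model to sharpen the final change mask; that fusion step is omitted from the sketch above.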

Cite

APA

Noh, H. C., Ju, J. G., Kim, Y. H., Kim, M. W., & Choi, D. G. (2024). Unsupervised change detection based on image reconstruction loss with segment anything. Remote Sensing Letters, 15(9), 919–929. https://doi.org/10.1080/2150704X.2024.2388851
