Most change detection (CD) methods assume that the pre-change and post-change images are acquired by the same sensor. However, in many real-life scenarios, e.g., natural disasters, it is more practical to use the latest available images before and after the incident, which may have been acquired by different sensors. In particular, we are interested in combining images acquired by optical and synthetic aperture radar (SAR) sensors. SAR images appear vastly different from optical images even when capturing the same scene. Moreover, CD methods are often constrained to use only the target image pair, with no labeled data and no additional unlabeled data. Such constraints limit the scope of traditional supervised machine learning and unsupervised generative approaches for multisensor CD. The recent rapid development of self-supervised learning has shown that some methods can work with only a few images. Motivated by this, we propose a method for multisensor CD that uses only the unlabeled target bitemporal images, training a network in a self-supervised fashion via deep clustering and contrastive learning. The proposed method is evaluated on four multimodal bitemporal scenes showing change, and the benefits of our self-supervised approach are demonstrated. Code is available at https://gitlab.lrz.de/ai4eo/cd/-/tree/main/sarOpticalMultisensorTgrs2021.
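The abstract mentions contrastive learning for aligning features across the two sensors. As an illustration only (not the authors' exact formulation), a generic InfoNCE-style contrastive loss between optical and SAR patch features could be sketched as follows; all function and variable names here are hypothetical:

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """Generic InfoNCE-style contrastive loss between two sets of
    features (e.g., optical and SAR views of the same patches).
    Matching rows are treated as positive pairs; all other rows in
    the batch serve as negatives. This is a sketch, not the paper's
    exact objective."""
    # L2-normalize features so similarities are cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    # pairwise similarity logits, scaled by temperature
    logits = z_a @ z_b.T / temperature
    # log-softmax over each row; positives sit on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# toy example: 4 patches, 8-dim features; the SAR features are a
# noisy version of the optical ones, mimicking aligned unchanged areas
rng = np.random.default_rng(0)
feats_optical = rng.normal(size=(4, 8))
feats_sar = feats_optical + 0.05 * rng.normal(size=(4, 8))
loss = info_nce_loss(feats_optical, feats_sar)
```

Minimizing such a loss pulls features of corresponding patches together across sensors while pushing non-corresponding patches apart, which is one common way to make optical and SAR representations comparable for change analysis.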
Citation:
Saha, S., Ebel, P., & Zhu, X. X. (2022). Self-Supervised Multisensor Change Detection. IEEE Transactions on Geoscience and Remote Sensing, 60. https://doi.org/10.1109/TGRS.2021.3109957