Image translation between high-resolution remote sensing optical and SAR data using conditional GAN

Abstract

This paper presents a study on a new problem: applying machine learning approaches to translate remote sensing images between high-resolution optical and Synthetic Aperture Radar (SAR) data. To this end, conditional Generative Adversarial Networks (GANs) have been explored. The efficiency of the conditional GAN has been verified with different SAR parameters on three regions of the world: Toronto and Vancouver in Canada, and Shanghai in China. The generated SAR images have been evaluated by pixel-based image classification over detailed land cover types, including low- and high-density residential areas, industrial areas, construction sites, golf courses, water, forest, pasture, and crops. In comparison with an unsupervised GAN translation approach, the proposed conditional GAN effectively preserves most land cover types, with classification accuracy comparable to that of the ground-truth SAR data. This is one of the first studies on multi-source remote sensing data translation by machine learning.
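
The conditional image-to-image translation described in the abstract can be illustrated with a minimal PyTorch sketch of a pix2pix-style setup: a generator maps an optical patch to a SAR-like patch, and a discriminator judges (optical, SAR) pairs; the generator is trained with an adversarial loss plus an L1 term against the co-registered ground-truth SAR. The network sizes, loss weights, and tensor shapes below are illustrative assumptions, not the architecture reported in the paper.

```python
# Hedged sketch of a conditional GAN for optical -> SAR translation (pix2pix-style).
# All hyperparameters and layer choices are assumptions for illustration only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a 3-channel optical patch to a 1-channel SAR-like patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic over the concatenated (optical, SAR) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
        )

    def forward(self, optical, sar):
        return self.net(torch.cat([optical, sar], dim=1))

def train_step(G, D, opt_G, opt_D, optical, sar_real, l1_weight=100.0):
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # Discriminator: real (optical, SAR) pairs vs. generated pairs.
    sar_fake = G(optical).detach()
    logits_real = D(optical, sar_real)
    logits_fake = D(optical, sar_fake)
    loss_D = bce(logits_real, torch.ones_like(logits_real)) + \
             bce(logits_fake, torch.zeros_like(logits_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator: fool D while staying close to the ground-truth SAR (L1 term).
    sar_fake = G(optical)
    logits_fake = D(optical, sar_fake)
    loss_G = bce(logits_fake, torch.ones_like(logits_fake)) + l1_weight * l1(sar_fake, sar_real)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()

if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
    optical = torch.randn(4, 3, 64, 64)   # dummy optical patches
    sar = torch.randn(4, 1, 64, 64)       # dummy co-registered SAR patches
    print(train_step(G, D, opt_G, opt_D, optical, sar))
```

In the paper's evaluation, the generated SAR patches would then be fed to a pixel-based land cover classifier and compared against results on real SAR imagery; the sketch above only covers the translation step.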

Citation (APA)

Niu, X., Yang, D., Yang, K., Pan, H., & Dou, Y. (2018). Image translation between high-resolution remote sensing optical and SAR data using conditional GAN. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11166 LNCS, pp. 245–255). Springer Verlag. https://doi.org/10.1007/978-3-030-00764-5_23
