Abstract
Synthetic Aperture Radar (SAR) is becoming prevalent in remote sensing, yet SAR images are difficult to interpret by human visual perception due to the active imaging mechanism and speckle noise. Recent research on SAR-to-optical image translation offers a promising solution and has attracted increasing attention, but the translated optical images still suffer from low quality and geometric distortion caused by the large domain gap. In this paper, we mitigate this issue from a novel perspective, namely neural partial differential equations (PDEs). First, based on an efficient numerical scheme for solving PDEs, i.e., the Taylor Central Difference (TCD), we devise a basic TCD residual block to build the backbone network, which promotes the extraction of useful information from SAR images by aggregating and enhancing features from different levels. Furthermore, inspired by Perona-Malik Diffusion (PMD), we devise a PMD neural module that performs feature diffusion through layers, aiming to remove noise in smooth regions while preserving geometric structures. Assembling these components, we obtain a new SAR-to-optical image translation network named S2O-NPDE, which delivers optical images with finer structures and less noise. Experiments on the popular SEN1-2 dataset show that S2O-NPDE outperforms state-of-the-art methods in both objective metrics and visual quality.
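The abstract only names the two PDE-inspired components, so the sketch below is an illustrative PyTorch interpretation rather than the authors' implementation. The `PeronaMalikDiffusion` layer is a plain explicit finite-difference realization of classical Perona-Malik diffusion applied to feature maps (the paper's PMD neural module is a learned variant whose exact form is given in the full text), and `TCDResidualBlock` follows the standard central-difference update u_{n+1} = u_{n-1} + 2h·f(u_n), combining features from two preceding levels; all class names, parameter names, and default values here are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PeronaMalikDiffusion(nn.Module):
    """Explicit Perona-Malik diffusion steps on feature maps.

    du/dt = div( g(|grad u|) * grad u ),  g(s) = 1 / (1 + (s / kappa)^2)
    """

    def __init__(self, kappa: float = 0.1, step: float = 0.2, n_steps: int = 3):
        super().__init__()
        self.kappa = kappa      # edge sensitivity: larger -> more smoothing across edges
        self.step = step        # explicit time step (<= 0.25 for 4-neighbour stability)
        self.n_steps = n_steps  # number of diffusion iterations

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature maps
        for _ in range(self.n_steps):
            # differences to the four neighbours (replicated at the border, so border flux is zero)
            dn = F.pad(x, (0, 0, 1, 0), mode="replicate")[:, :, :-1, :] - x  # north
            ds = F.pad(x, (0, 0, 0, 1), mode="replicate")[:, :, 1:, :] - x   # south
            dw = F.pad(x, (1, 0, 0, 0), mode="replicate")[:, :, :, :-1] - x  # west
            de = F.pad(x, (0, 1, 0, 0), mode="replicate")[:, :, :, 1:] - x   # east

            # conductance: close to 1 in smooth regions (noise removed),
            # small across strong gradients (geometric structures preserved)
            g = lambda d: 1.0 / (1.0 + (d / self.kappa) ** 2)

            x = x + self.step * (g(dn) * dn + g(ds) * ds + g(dw) * dw + g(de) * de)
        return x


class TCDResidualBlock(nn.Module):
    """Speculative central-difference residual block: u_{n+1} = u_{n-1} + 2h * f(u_n)."""

    def __init__(self, channels: int, step: float = 1.0):
        super().__init__()
        self.step = step
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x_prev: torch.Tensor, x_cur: torch.Tensor) -> torch.Tensor:
        # combine features from the two preceding levels, as in a leapfrog/central-difference step
        return x_prev + 2.0 * self.step * self.f(x_cur)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    pmd = PeronaMalikDiffusion()
    tcd = TCDResidualBlock(64)
    print(pmd(feats).shape)               # torch.Size([2, 64, 32, 32])
    print(tcd(feats, pmd(feats)).shape)   # torch.Size([2, 64, 32, 32])
```

Under this reading, the diffusion layer plays the role of a denoising, structure-preserving feature filter between network stages, while the central-difference block replaces the usual Euler-style residual connection with one that draws on two earlier feature levels; the actual network design should be taken from the paper itself.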
Citation
Zhang, M., He, C., Zhang, J., Yang, Y., Peng, X., & Guo, J. (2022). SAR-to-Optical Image Translation via Neural Partial Differential Equations. In IJCAI International Joint Conference on Artificial Intelligence (pp. 1644–1650). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/229