Deep Learning for Understanding Satellite Imagery: An Experimental Survey

Citations: 64 · Mendeley readers: 103

Abstract

Translating satellite imagery into maps requires intensive effort and time, which often leads to inaccurate maps of affected regions during disasters and conflicts. The availability of recent datasets, combined with advances in computer vision driven by deep learning, has paved the way toward automated satellite image translation. To facilitate research in this direction, we introduce the Satellite Imagery Competition, based on a modified SpaceNet dataset, in which participants developed segmentation models to detect the positions of buildings in satellite images. In this work, we present five approaches built on improvements to the U-Net and Mask R-CNN (Region-based Convolutional Neural Network) models, coupled with training adaptations that include boosting algorithms, morphological filtering, Conditional Random Fields, and custom losses. The strong results from these models, as high as (Formula presented.) and (Formula presented.), demonstrate the feasibility of deep learning for automated satellite image annotation.
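To make the pipeline described above concrete, the sketch below shows one plausible arrangement of the pieces the abstract names: a small U-Net-style encoder-decoder that predicts a per-pixel building mask, followed by a morphological opening to clean up spurious detections. This is not the authors' code; the TinyUNet layer sizes, the 0.5 threshold, and the 3x3 structuring element are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's implementation) of a
# U-Net-like building-segmentation model plus morphological post-processing.
import numpy as np
import torch
import torch.nn as nn
from scipy import ndimage


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    """Two-level U-Net-style model for binary (building / background) masks."""

    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)   # 16 skip channels + 16 upsampled
        self.head = nn.Conv2d(16, 1, 1)  # per-pixel building logit

    def forward(self, x):
        e1 = self.enc1(x)                        # skip-connection features
        e2 = self.enc2(self.pool(e1))            # downsampled features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)


model = TinyUNet()
image = torch.rand(1, 3, 256, 256)               # stand-in RGB satellite tile
with torch.no_grad():
    prob = torch.sigmoid(model(image))[0, 0].numpy()

# Morphological opening removes isolated false-positive pixels from the mask.
mask = ndimage.binary_opening(prob > 0.5, structure=np.ones((3, 3)))
```

In practice the competition entries used deeper backbones, additional post-processing such as Conditional Random Fields, and task-specific losses; this sketch only illustrates how segmentation and morphological filtering fit together.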

Citation (APA)

Mohanty, S. P., Czakon, J., Kaczmarek, K. A., Pyskir, A., Tarasiewicz, P., Kunwar, S., … Schilling, M. (2020). Deep Learning for Understanding Satellite Imagery: An Experimental Survey. Frontiers in Artificial Intelligence, 3. https://doi.org/10.3389/frai.2020.534696
