Automatic building detection from high-resolution satellite imagery has many applications. Understanding socioeconomic development and tracking population migration are essential for effective civic planning, and such systems can also help update maps after natural disasters or in regions undergoing rapid population growth. A variety of classical image processing techniques have been applied to this task, but they are often inaccurate or slow to process. In this work, convolutional neural networks (CNNs) are designed to extract buildings from satellite images, based on the U-Net architecture originally developed for medical image segmentation. The training data consist of a small number of RGB images of varying sizes from an open dataset, which highlights one of the U-Net's advantages: it achieves high accuracy from a limited amount of training material with modest effort and training time. The encoder portion of the U-Net was modified to assess the feasibility of transfer learning, with both VGGNet and ResNet used as backbones, and the results were compared against a bespoke U-Net designed from the ground up. The VGGNet backbone proved to be the best feature extractor, with an accuracy of 84.9%. Compared to the current best models tackling a similar problem with a larger dataset, the present results are considered superior.
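As a rough illustration of the transfer-learning setup described above, the sketch below builds a U-Net-style decoder on top of a pretrained VGG16 encoder. The framework (TensorFlow/Keras), layer choices, input size, and training configuration are assumptions for illustration only, not the authors' actual implementation.

```python
# Hypothetical sketch: U-Net decoder over a pretrained VGG16 encoder.
# Assumes TensorFlow/Keras; none of these choices come from the paper itself.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

def vgg_unet(input_shape=(256, 256, 3)):
    # Pretrained VGG16 encoder; intermediate activations feed the skip connections.
    encoder = VGG16(include_top=False, weights="imagenet", input_shape=input_shape)
    skips = [encoder.get_layer(name).output for name in
             ("block1_conv2", "block2_conv2", "block3_conv3", "block4_conv3")]
    x = encoder.get_layer("block5_conv3").output  # bottleneck features (16x16 here)

    # Decoder: upsample, concatenate with the matching encoder skip, refine with convs.
    for skip, filters in zip(reversed(skips), (512, 256, 128, 64)):
        x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    # 1x1 convolution produces a single-channel building/non-building mask.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(encoder.input, outputs)

model = vgg_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

A ResNet backbone could be swapped in analogously by tapping the outputs of its residual stages for the skip connections; the decoder structure stays the same.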
Alsabhan, W., Alotaiby, T., & Dudin, B. (2022). Detecting Buildings and Nonbuildings from Satellite Images Using U-Net. Computational Intelligence and Neuroscience, 2022. https://doi.org/10.1155/2022/4831223