Comparison of Fully Convolutional Networks (FCN) and U-Net for Road Segmentation from High Resolution Imageries

  • OZTURK O
  • SARITÜRK B
  • SEKER D

Abstract

Segmentation is one of the most popular classification techniques, in which each image region is assigned a semantic label. In this context, the segmentation of objects such as cars, airplanes, ships, and buildings, which stand apart from the background, as well as land-use and vegetation classes, which are difficult to discriminate from the background, is considered. However, image segmentation studies often encounter difficulties such as shadow, occlusion, background clutter, lighting, and shading, which cause fundamental changes in the appearance of features. With the development of technology, high-spatial-resolution satellite imagery and aerial photographs containing detailed texture information have become easy to obtain. In parallel with these improvements, deep learning architectures have been widely used to solve computer vision tasks of increasing difficulty. Thus, regional characteristics, both artificial and natural objects, can be perceived and interpreted precisely. In this study, two subset data sets produced from a large open-source labeled image set were used for road segmentation. The labeled data set consists of 150 satellite images of 1500 x 1500 pixels at 1.2 m resolution, which was not efficient for training directly. To avoid this problem, the images were divided into smaller dimensions: selected images from the data set were split into patches of 256 x 256 pixels and 512 x 512 pixels to train the system, and the two patch sizes were compared. To train the system on these data sets, two artificial neural network architectures used for object segmentation on high-resolution images, U-Net and Fully Convolutional Networks (FCN), were selected.
When test data of the same size as the training data were analyzed, approximately 97% extraction accuracy was obtained from the high-resolution imagery trained with FCN on 512 x 512 patches.
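The patch-extraction step described above (splitting 1500 x 1500 images into 256 x 256 or 512 x 512 tiles) can be sketched as follows. This is a minimal illustration, not the authors' code; it assumes non-overlapping tiles and that any remainder at the image border is discarded, details the abstract does not specify.

```python
import numpy as np


def tile(image: np.ndarray, patch: int) -> list:
    """Split an H x W (x C) image into non-overlapping patch x patch tiles.

    Border pixels that do not fill a complete tile are dropped
    (an assumption; the paper may instead pad or overlap tiles).
    """
    h, w = image.shape[:2]
    return [
        image[r:r + patch, c:c + patch]
        for r in range(0, h - patch + 1, patch)
        for c in range(0, w - patch + 1, patch)
    ]


# A 1500 x 1500 image yields 5 x 5 = 25 tiles at 256 px,
# or 2 x 2 = 4 tiles at 512 px (remainders discarded).
img = np.zeros((1500, 1500, 3), dtype=np.uint8)
patches_256 = tile(img, 256)
patches_512 = tile(img, 512)
```

Under these assumptions, each source image contributes 25 training samples at 256 x 256 but only 4 at 512 x 512, which is one practical trade-off when comparing the two patch sizes.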

Citation (APA)

OZTURK, O., SARITÜRK, B., & SEKER, D. Z. (2020). Comparison of Fully Convolutional Networks (FCN) and U-Net for Road Segmentation from High Resolution Imageries. International Journal of Environment and Geoinformatics, 7(3), 272–279. https://doi.org/10.30897/ijegeo.737993
