Multi-Source Fusion Image Semantic Segmentation Model of Generative Adversarial Networks Based on FCN

This article is free to access.

Abstract

Most current methods for image semantic segmentation ignore low-level image features such as spatial and edge information, so object edges and small parts are segmented imprecisely and overall accuracy suffers. To address this, this paper proposes SCAGAN, a multi-source fusion image semantic segmentation model that combines a generative adversarial network with an FCN. Superpixel and edge-detection algorithms are added to the VGG19 network, and an efficient spatial pyramid module is introduced to reduce the number of parameters while injecting the image's spatial and edge information. The skip structure is adjusted to better fuse low-level and high-level features. A generation model, DeepLab-SCFCN, is built with atrous spatial pyramid pooling to better capture feature information at the different scales of the segmentation target, and an FCN with five modules serves as the discrimination model of the GAN. Experiments on the PASCAL VOC 2012 dataset show that the model achieves an IoU of 70.1% with a small number of network layers while segmenting edges and small parts more accurately.
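The atrous spatial pyramid pooling (ASPP) mentioned in the abstract captures multi-scale context by applying the same convolution kernel at several dilation rates in parallel. The paper does not give implementation details, so the following is only a minimal NumPy sketch of the idea; the function names `dilated_conv2d` and `aspp`, the 3×3 kernel, and the dilation rates `[1, 2, 3]` are illustrative assumptions, not the authors' code.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """Single-channel 2D convolution with a dilated (atrous) kernel.

    A dilation rate d inserts d-1 gaps between kernel taps, so a k x k
    kernel covers a (k-1)*d + 1 window without adding parameters.
    """
    kh, kw = kernel.shape
    span_h = (kh - 1) * dilation + 1   # effective receptive field height
    span_w = (kw - 1) * dilation + 1   # effective receptive field width
    out_h = x.shape[0] - span_h + 1
    out_w = x.shape[1] - span_w + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Sample the input at strides of `dilation` inside the window.
            patch = x[i:i + span_h:dilation, j:j + span_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

def aspp(x, kernel, rates):
    """Toy ASPP: run parallel dilated convolutions at several rates and
    stack the results, padding so each output matches the input size."""
    feats = []
    for d in rates:
        pad = ((kernel.shape[0] - 1) * d) // 2  # "same" padding per rate
        xp = np.pad(x, pad)
        feats.append(dilated_conv2d(xp, kernel, d))
    return np.stack(feats)  # shape: (len(rates), H, W)

# Larger rates see wider context with the same 3x3 kernel, which is how
# ASPP captures objects at different scales.
features = aspp(np.ones((6, 6)), np.ones((3, 3)), rates=[1, 2, 3])
print(features.shape)  # → (3, 6, 6)
```

In the real model each branch would be a multi-channel convolution followed by batch normalization and fusion with a 1×1 convolution; the sketch above only demonstrates how dilation widens the receptive field without increasing the parameter count.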

Citation (APA)
Zhao, L., Wang, Y., Duan, Z., Chen, D., & Liu, S. (2021). Multi-Source Fusion Image Semantic Segmentation Model of Generative Adversarial Networks Based on FCN. IEEE Access, 9, 101985–101993. https://doi.org/10.1109/ACCESS.2021.3097054
