Deep Residual Texture Network for Terrain Recognition



Abstract

In terrain recognition, a terrain image contains not only texture features but also spatial features. Traditional feature descriptors focus mainly on the texture features of a terrain image and ignore its spatial features, whereas a convolutional neural network (CNN) extracts spatial features well thanks to its convolutional and pooling layers. How to extract both kinds of feature at the same time is therefore a challenging problem. In this paper, we introduce a deep residual texture network (DrtNet) that embeds a texture detail layer in a residual convolutional network, forming an end-to-end learning network. DrtNet simultaneously extracts the spatial geometric features and the texture detail features of a terrain image, successfully combining traditional texture feature descriptors with a convolutional neural network. Experimental results show that DrtNet achieves an accuracy of 97.85% on the SDU_Terrain16 dataset that we created, outperforming traditional methods and current popular deep convolutional networks. In addition, DrtNet achieves good results on two other material/texture datasets (GTOS and DTD).
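The abstract does not specify how the texture detail layer and the residual network's spatial features are combined, so the sketch below is only a plausible illustration of the general idea: an orderless, dictionary-based texture encoding (in the style of Deep-TEN-like encoding layers) computed over local CNN descriptors, concatenated with a global-average-pooled spatial vector. All names (`texture_encoding`, `codewords`, the feature-map sizes) are hypothetical, not taken from the paper.

```python
import numpy as np

def texture_encoding(features, codewords):
    """Orderless texture encoding: soft-assign each local descriptor to
    dictionary codewords and aggregate the residuals (hypothetical sketch,
    not the paper's exact layer).

    features:  (N, D) local descriptors from a conv feature map
    codewords: (K, D) dictionary of codewords
    returns:   (K * D,) aggregated residual vector
    """
    residuals = features[:, None, :] - codewords[None, :, :]   # (N, K, D)
    dists = np.sum(residuals ** 2, axis=2)                     # (N, K)
    weights = np.exp(-dists)
    weights /= weights.sum(axis=1, keepdims=True)              # soft assignment
    encoded = np.sum(weights[:, :, None] * residuals, axis=0)  # (K, D)
    return encoded.reshape(-1)

def global_avg_pool(feature_map):
    """Spatial branch: collapse an (H, W, D) feature map to a (D,) vector."""
    return feature_map.mean(axis=(0, 1))

# Mock stand-ins for a conv feature map and a learned dictionary.
rng = np.random.default_rng(0)
fmap = rng.normal(size=(7, 7, 16))      # (H, W, D) conv feature map
codewords = rng.normal(size=(8, 16))    # K = 8 codewords of dimension D = 16

locals_ = fmap.reshape(-1, 16)                  # N = 49 local descriptors
tex = texture_encoding(locals_, codewords)      # orderless texture vector, (128,)
spa = global_avg_pool(fmap)                     # order-sensitive spatial vector, (16,)
fused = np.concatenate([tex, spa])              # joint representation, (144,)
print(fused.shape)  # → (144,)
```

The key design point this sketch illustrates is that the texture branch discards spatial layout (it sums over all locations), while the pooled branch retains a coarse spatial summary, so concatenating them gives the classifier both kinds of information at once.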

Citation (APA)
Song, P., Ma, X., Li, X., & Li, Y. (2019). Deep Residual Texture Network for Terrain Recognition. IEEE Access, 7, 90152–90161. https://doi.org/10.1109/ACCESS.2019.2926994
