Unsupervised Image Feature Extraction Based on Scattering Transform and Self-supervised Learning with Highly Training Efficiency


Abstract

Convolutional neural networks (CNNs) can effectively extract high-level semantic features from images; however, learning these features requires a large amount of labelled data. To extract image features without labelled data, this paper proposes an unsupervised image feature extraction method based on self-supervised learning. To learn image features, we train a neural network to identify the two-dimensional rotation applied to an image. The first few layers of the convolutional network are replaced with a scattering network to speed up the training process, and good image features are obtained in the last few layers of the convolutional network. We feed the extracted features into a convolutional network for supervised learning and take recognition accuracy as the criterion of feature validity. The experimental results show that recognition accuracy reaches 84% on CIFAR10, matching mainstream unsupervised methods, and 62.17% on CIFAR100, which is very close to that of supervised learning. This method can be applied in settings that lack massive labelled training data and have limited computing resources.
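The rotation pretext task described in the abstract can be sketched as follows: each training image is rotated by 0, 90, 180, and 270 degrees, and the network is trained to predict which rotation was applied. This is a minimal illustrative sketch, not the authors' code; the function name and NumPy-based data preparation are assumptions.

```python
import numpy as np

def make_rotation_dataset(images):
    """Build a self-supervised rotation-prediction dataset.

    Given a batch of images with shape (N, H, W, C), return the four
    90-degree rotations of each image together with the rotation class
    labels 0-3 (0 = 0 deg, 1 = 90 deg, 2 = 180 deg, 3 = 270 deg).
    A classifier trained on these labels learns image features without
    any human annotation.
    """
    rotated, labels = [], []
    for img in images:
        for k in range(4):
            rotated.append(np.rot90(img, k))  # rotate by k * 90 degrees
            labels.append(k)                   # rotation class as the label
    return np.stack(rotated), np.array(labels)
```

A network (here, one whose first layers are replaced by a fixed scattering transform) would then be trained with standard cross-entropy on these four-way labels, and its later convolutional layers used as the learned feature extractor.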


APA

Zheng, G., & Zhu, Q. (2019). Unsupervised Image Feature Extraction Based on Scattering Transform and Self-supervised Learning with Highly Training Efficiency. In Journal of Physics: Conference Series (Vol. 1237). Institute of Physics Publishing. https://doi.org/10.1088/1742-6596/1237/3/032044
