Self-Supervised Feature Enhancement: Applying Internal Pretext Task to Supervised Learning


This article is free to access.

Abstract

Traditional self-supervised learning requires convolutional neural networks (CNNs) to encode high-level semantic visual representations using external pretext tasks (i.e., image- or video-based tasks). In this paper, we show that feature transformations within CNNs can also serve as supervisory signals for constructing a self-supervised task, which we call an internal pretext task, and that such a task can be applied to enhance supervised learning. Specifically, we first transform internal feature maps by discarding different channels, and then define an additional internal pretext task that identifies which channels were discarded. CNNs are trained to predict joint labels formed by combining the self-supervised labels with the original class labels. In this way, the network learns which channels are missing while classifying, in the hope of mining richer feature information. Extensive experiments show that our approach is effective across various models and datasets while incurring only negligible computational overhead. Moreover, it is compatible with other methods and can be combined with them for further gains.
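The channel-discarding and joint-label construction described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes channels are dropped in contiguous groups and that the joint label is formed as class_label × num_groups + group_id; the helper names (drop_channel_group, joint_label) and the grouping scheme are assumptions for illustration only.

```python
import numpy as np

def drop_channel_group(feat, group_id, num_groups):
    """Zero out one contiguous group of channels in a (C, H, W) feature map.

    Hypothetical grouping scheme: C channels are split into num_groups
    equal contiguous groups, and one group is discarded (zeroed).
    """
    c = feat.shape[0]
    assert c % num_groups == 0, "channel count must be divisible by num_groups"
    size = c // num_groups
    out = feat.copy()
    out[group_id * size:(group_id + 1) * size] = 0.0
    return out

def joint_label(class_label, group_id, num_groups):
    """Fold the original class label and the self-supervised label
    ("which channel group was dropped") into a single joint index."""
    return class_label * num_groups + group_id

# Example: an 8-channel feature map, split into 4 groups of 2 channels.
feat = np.ones((8, 4, 4))
dropped = drop_channel_group(feat, group_id=2, num_groups=4)
print(np.allclose(dropped[4:6], 0.0))   # the dropped group is zeroed
print(joint_label(class_label=3, group_id=2, num_groups=4))  # → 14
```

A classifier trained on such joint labels predicts class and dropped-group identity simultaneously; at test time, the original class prediction can be recovered by marginalizing (or integer-dividing) over the group dimension.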


Citation (APA)

Xie, T., Yang, Y., Ding, Z., Cheng, X., Wang, X., Gong, H., & Liu, M. (2023). Self-Supervised Feature Enhancement: Applying Internal Pretext Task to Supervised Learning. IEEE Access, 11, 1708–1717. https://doi.org/10.1109/ACCESS.2022.3233104

