Relative Contrastive Loss for Unsupervised Representation Learning

Abstract

Defining positive and negative samples is critical for learning the visual variations of semantic classes in an unsupervised manner. Previous methods either construct positive pairs from different data augmentations of the same image (i.e., single-instance-positive) or estimate a class prototype by clustering (i.e., prototype-positive); both ignore the relative nature of positive/negative concepts in the real world. Motivated by the human ability to recognize relatively positive/negative samples, we propose the Relative Contrastive Loss (RCL) to learn feature representations from relatively positive/negative pairs, which not only captures more real-world semantic variations than single-instance-positive methods but also respects positive-negative relativity, in contrast to absolute prototype-positive methods. The proposed RCL improves the linear-evaluation accuracy of MoCo v3 by +2.0% on ImageNet.
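The paper's full RCL formulation is not reproduced on this page. As background, here is a minimal NumPy sketch of the standard single-instance-positive InfoNCE loss that MoCo-style methods (the baseline RCL improves upon) optimize; the function name and the convention that the i-th query and i-th key form the positive pair are illustrative assumptions, not the paper's code:

```python
import numpy as np

def info_nce_loss(query, key, temperature=0.2):
    """Single-instance-positive InfoNCE loss (MoCo/SimCLR-style baseline).

    query, key: (N, D) embedding batches; the i-th query and i-th key are
    assumed to be two augmentations of the same image (the positive pair),
    and all other keys in the batch serve as negatives.
    """
    # L2-normalize so similarities are cosine similarities
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    k = key / np.linalg.norm(key, axis=1, keepdims=True)
    logits = q @ k.T / temperature  # (N, N) similarity matrix

    # Softmax cross-entropy with the positives on the diagonal
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

RCL generalizes this binary positive/negative split into graded, relative positives; that extension is described in the paper itself.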

Citation (APA)

Tang, S., Zhu, F., Bai, L., Zhao, R., & Ouyang, W. (2022). Relative Contrastive Loss for Unsupervised Representation Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13687 LNCS, pp. 1–18). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-19812-0_1
