This paper presents a simple unsupervised visual representation learning method whose pretext task is to discriminate every image in a dataset using a parametric, instance-level classifier. The overall framework is a replica of a supervised classification model, where semantic classes (e.g., dog, bird, and ship) are replaced by instance IDs. However, scaling the classification task up from thousands of semantic labels to millions of instance labels brings specific challenges, including 1) the large-scale softmax computation; 2) slow convergence due to the infrequent visiting of instance samples; and 3) a massive number of negative classes that can be noisy. This work presents several novel techniques to handle these difficulties. First, we introduce a hybrid parallel training framework to make large-scale training feasible. Second, we present a raw-feature initialization mechanism for the classification weights, which we assume offers a contrastive prior for instance discrimination and which clearly speeds up convergence in our experiments. Finally, we propose to smooth the labels of the few hardest classes to avoid optimizing over very similar negative pairs. While conceptually simple, our framework achieves competitive or superior performance compared to state-of-the-art unsupervised approaches, namely SimCLR, MoCo v2, and PIC, under the ImageNet linear evaluation protocol and on several downstream visual tasks, verifying that full instance classification is a strong pretraining technique for many semantic visual tasks.
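The sketch below is a minimal PyTorch illustration, not the authors' released implementation, of two of the techniques the abstract names: raw-feature initialization of the instance classifier weights and label smoothing restricted to the few hardest negative classes. All names (`InstanceClassifier`, `init_from_raw_features`, `k_hard`, `smooth_eps`, `temp`) are illustrative assumptions, and the hybrid parallel training framework is omitted entirely.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceClassifier(nn.Module):
    """One weight vector per image instance (the 'one-million-way' classifier).

    Hypothetical sketch: the paper's actual architecture and hybrid
    parallelism are not reproduced here.
    """

    def __init__(self, feat_dim: int, num_instances: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_instances, feat_dim) * 0.01)

    @torch.no_grad()
    def init_from_raw_features(self, features: torch.Tensor):
        # Raw-feature initialization: copy each instance's (normalized)
        # encoder feature into its classifier weight, acting as a
        # contrastive prior before training starts.
        self.weight.copy_(F.normalize(features, dim=1))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Cosine-similarity logits between features and all instance weights.
        return F.normalize(feats, dim=1) @ F.normalize(self.weight, dim=1).t()

def smoothed_instance_loss(logits, target, k_hard=10, smooth_eps=0.1, temp=0.07):
    # Label smoothing over only the k hardest negatives: the classes with the
    # largest logits (most similar instances) share smooth_eps of the
    # probability mass, so near-duplicate negatives are not pushed apart hard.
    logits = logits / temp
    with torch.no_grad():
        neg = logits.clone()
        neg.scatter_(1, target.unsqueeze(1), float('-inf'))  # mask the positive
        hard_idx = neg.topk(k_hard, dim=1).indices           # hardest negatives
        soft = torch.zeros_like(logits)
        soft.scatter_(1, hard_idx, smooth_eps / k_hard)
        soft.scatter_(1, target.unsqueeze(1), 1.0 - smooth_eps)
    # Cross-entropy against the smoothed target distribution.
    return -(soft * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```

In this reading, `init_from_raw_features` would be called once with features gathered from a single pass of the (initial) encoder over the dataset, and the smoothed loss replaces plain cross-entropy during training; both choices follow the abstract's description under the stated assumptions.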
Liu, Y., Huang, L., Pan, P., Wang, B., Xu, Y., & Jin, R. (2021). Train a One-Million-Way Instance Classifier for Unsupervised Visual Representation Learning. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 10A, pp. 8706–8714). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i10.17055