Multi-Label Supervised Contrastive Learning


Abstract

Multi-label classification is a challenging problem due to the complexity of label correlations. While it shares with contrastive learning the goal of exploiting correlations for representation learning, how to best leverage label information remains an open question. Previous efforts either extract label-level representations or map labels into an embedding space, overlooking the correlations among multiple labels. Considerable ambiguity also arises in defining positive samples when samples share labels to different extents, and in integrating such relations into the loss function. In this work, we propose Multi-Label Supervised Contrastive learning (MulSupCon), a novel contrastive loss function that adjusts weights according to how much label overlap a sample shares with the anchor. Through a gradient analysis, we explain why our method performs better in multi-label settings. For evaluation, we conduct direct classification and transfer learning on several multi-label datasets, including widely used image datasets such as MS-COCO and NUS-WIDE. The results show that our method outperforms the traditional multi-label classification baseline and achieves competitive performance compared to other existing approaches.
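To make the core idea concrete, the overlap-weighted contrastive loss described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the exact weighting and normalization in MulSupCon may differ, and the function name and signature are assumptions for this sketch.

```python
import numpy as np

def mulsupcon_like_loss(features, labels, temperature=0.1):
    """Sketch of an overlap-weighted supervised contrastive loss.

    features: (N, D) array of L2-normalized embeddings.
    labels:   (N, C) binary multi-hot label matrix.
    """
    n = features.shape[0]
    sims = features @ features.T / temperature            # pairwise similarities
    mask_self = np.eye(n, dtype=bool)
    sims = np.where(mask_self, -np.inf, sims)             # exclude the anchor itself
    # Log-softmax over all non-anchor samples (the contrastive denominator).
    log_prob = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))

    overlap = (labels @ labels.T).astype(float)           # |y_i ∩ y_j| per pair
    overlap[mask_self] = 0.0
    # Assumption: weight each positive in proportion to its label overlap
    # with the anchor, normalized per anchor (the paper's exact scheme may vary).
    weights = overlap / np.maximum(overlap.sum(axis=1, keepdims=True), 1.0)
    # Guard with np.where so entries with zero weight never multiply -inf.
    return float(-(weights * np.where(overlap > 0, log_prob, 0.0)).sum() / n)
```

Samples that share more labels with the anchor contribute more strongly to pulling representations together, which is the behavior the abstract attributes to the proposed loss.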

Cite

CITATION STYLE

APA

Zhang, P., & Wu, M. (2024). Multi-Label Supervised Contrastive Learning. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 16786–16793). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i15.29619
