e-CLIP: Large-Scale Vision-Language Representation Learning in E-commerce


Abstract

Understanding vision and language representations of product content is vital for search and recommendation applications in e-commerce. Inspired by the recent success of representation learning research, we propose a contrastive learning framework that aligns language and visual models using unlabeled raw product text and images, and that serves as a backbone for online shopping platforms. We present the techniques we used to train large-scale representation learning models and share solutions that address domain-specific challenges. We study the performance of our pre-trained models as backbones for diverse downstream tasks, including category classification, attribute extraction, product matching, product clustering, and adult product recognition. Experimental results show that our proposed method outperforms the baseline on each downstream task, in both single-modality and multi-modality settings.
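The abstract does not spell out the training objective, but CLIP-style frameworks of this kind conventionally align the two encoders with a symmetric InfoNCE (contrastive) loss over in-batch image-text pairs. Below is a minimal PyTorch sketch of such a loss, assuming each encoder already produces a fixed-dimension embedding per product; the function name and the temperature value are illustrative, not taken from the paper.

import torch
import torch.nn.functional as F

def clip_style_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of (image, text) pairs.

    image_emb, text_emb: (batch, dim) outputs of the vision and
    language encoders; matching pairs share the same batch index.
    """
    # L2-normalize so the dot product is cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix; diagonal entries are positives.
    logits = image_emb @ text_emb.t() / temperature

    targets = torch.arange(logits.size(0), device=logits.device)

    # Contrast in both directions: image-to-text and text-to-image.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

Because the positives sit on the diagonal of the similarity matrix, the loss pulls each product image toward its own text (e.g., its title) and pushes it away from the other texts in the batch, and symmetrically for the text side.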

Citation (APA)
Shin, W., Park, J., Woo, T., Cho, Y., Oh, K., & Song, H. (2022). e-CLIP: Large-Scale Vision-Language Representation Learning in E-commerce. In International Conference on Information and Knowledge Management, Proceedings (pp. 3484–3494). Association for Computing Machinery. https://doi.org/10.1145/3511808.3557067
