ConaCLIP: Exploring Distillation of Fully-Connected Knowledge Interaction Graph for Lightweight Text-Image Retrieval

Abstract

Large-scale pre-trained text-image models with dual-encoder architectures (such as CLIP (Radford et al., 2021)) are typically adopted for various vision-language applications, including text-image retrieval. However, these models remain impractical on edge devices or in real-time scenarios, due to their substantial indexing and inference time and heavy consumption of computational resources. Although knowledge distillation techniques have been widely used for uni-modal model compression, how to extend them to the setting where the numbers of modalities and of teachers/students are doubled has rarely been studied. In this paper, we conduct comprehensive experiments on this topic and propose the fully-Connected knowledge interaction graph (Cona) technique for cross-modal pre-training distillation. Based on our findings, the resulting ConaCLIP achieves SOTA performance on the widely-used Flickr30K and MSCOCO benchmarks under the lightweight setting. An industrial application of our method on an e-commerce platform further demonstrates the significant effectiveness of ConaCLIP.
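
To make the idea concrete, below is a minimal, hypothetical sketch of what a "fully-connected" knowledge interaction graph objective could look like for a frozen CLIP teacher and a lightweight dual-encoder student. The encoder variables, the symmetric InfoNCE helper, and the particular set of graph edges are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: a fully-connected cross-modal distillation objective.
# Assumes four encoders (teacher/student x text/image) that each map a batch
# to L2-normalized embeddings of the same dimension; names are illustrative.
import torch
import torch.nn.functional as F


def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss between two batches of normalized embeddings."""
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


def cona_style_loss(t_txt: torch.Tensor, t_img: torch.Tensor,
                    s_txt: torch.Tensor, s_img: torch.Tensor) -> torch.Tensor:
    """Sum pairwise losses over edges of the teacher/student, text/image
    interaction graph (the teacher-teacher edge is omitted since the
    teacher is frozen)."""
    pairs = [
        (s_txt, s_img),  # student text <-> student image: the usual retrieval objective
        (t_txt, s_img),  # teacher text  -> student image: cross-modal distillation
        (t_img, s_txt),  # teacher image -> student text: cross-modal distillation
        (t_txt, s_txt),  # teacher text  -> student text: intra-modal distillation
        (t_img, s_img),  # teacher image -> student image: intra-modal distillation
    ]
    return sum(info_nce(a, b) for a, b in pairs)
```

The intent of the sketch is that every teacher/student, text/image encoder pair contributes a supervisory edge, rather than only the conventional same-modality teacher-to-student pairs.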

Citation (APA)

Wang, J., Wang, C., Wang, X., Huang, J., & Jin, L. (2023). ConaCLIP: Exploring Distillation of Fully-Connected Knowledge Interaction Graph for Lightweight Text-Image Retrieval. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 5, pp. 71–80). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-industry.8
