Large-scale pre-trained text-image models with dual-encoder architectures (such as CLIP (Radford et al., 2021)) are typically adopted for various vision-language applications, including text-image retrieval. However, these models remain impractical on edge devices or in real-time scenarios, due to their substantial indexing and inference time and their high consumption of computational resources. Although knowledge distillation techniques have been widely used for uni-modal model compression, how to extend them to the setting where the numbers of modalities and teachers/students are doubled has rarely been studied. In this paper, we conduct comprehensive experiments on this topic and propose the fully-Connected knowledge interaction graph (Cona) technique for cross-modal pre-training distillation. Based on our findings, the resulting ConaCLIP achieves SOTA performance on the widely-used Flickr30K and MSCOCO benchmarks under the lightweight setting. An industry application of our method on an e-commerce platform further demonstrates the significant effectiveness of ConaCLIP.
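As a rough illustration of what a fully-connected knowledge interaction graph could look like in code, the sketch below (a hypothetical reconstruction, not the authors' implementation) sums a contrastive alignment loss over every encoder pair among the teacher image/text and student image/text encoders, skipping teacher-teacher pairs since teachers are frozen. The encoder names, the InfoNCE loss choice, the uniform edge weighting, and the temperature are all assumptions made for illustration.

```python
# Hypothetical sketch of a fully-connected knowledge interaction graph loss.
# Every pair of encoders among {teacher-image, teacher-text, student-image,
# student-text} contributes an alignment term; here each edge uses a simple
# symmetric InfoNCE loss, and teacher-teacher edges are skipped because the
# frozen teachers would receive no gradient anyway.
import itertools
import torch
import torch.nn.functional as F


def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss between two batches of embeddings (B, D)."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                      # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)    # matched pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def cona_distillation_loss(embeddings: dict) -> torch.Tensor:
    """Sum pairwise losses over the edges of the fully-connected interaction graph.

    `embeddings` maps encoder names such as "teacher_img", "teacher_txt",
    "student_img", "student_txt" to (B, D) embedding tensors. Teacher
    embeddings are detached so only the student encoders receive gradients.
    """
    total = next(iter(embeddings.values())).new_zeros(())
    for (name_a, emb_a), (name_b, emb_b) in itertools.combinations(embeddings.items(), 2):
        if name_a.startswith("teacher") and name_b.startswith("teacher"):
            continue  # both frozen: no gradient flows, so skip this edge
        if name_a.startswith("teacher"):
            emb_a = emb_a.detach()
        if name_b.startswith("teacher"):
            emb_b = emb_b.detach()
        total = total + info_nce(emb_a, emb_b)
    return total


if __name__ == "__main__":
    # Example usage with random features (batch of 8, embedding dim 64).
    B, D = 8, 64
    feats = {k: torch.randn(B, D) for k in
             ("teacher_img", "teacher_txt", "student_img", "student_txt")}
    print(cona_distillation_loss(feats))
```

In this sketch the loss therefore covers intra-modal and inter-modal teacher-to-student edges as well as the student-to-student edge, which is the "fully-connected" aspect the abstract refers to; the actual supervision types and weights used by ConaCLIP are described in the paper itself.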
Wang, J., Wang, C., Wang, X., Huang, J., & Jin, L. (2023). ConaCLIP: Exploring Distillation of Fully-Connected Knowledge Interaction Graph for Lightweight Text-Image Retrieval. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 5, pp. 71–80). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-industry.8