Improving Visual-Semantic Embedding with Adaptive Pooling and Optimization Objective

Abstract

Visual-Semantic Embedding (VSE) aims to learn an embedding space where related visual and semantic instances are close to each other. Recent VSE models tend to design complex structures to pool visual and semantic features into fixed-length vectors, and they use hard triplet loss for optimization. However, we find that: (1) combining simple pooling methods performs no worse than these sophisticated methods; and (2) considering only the most difficult-to-distinguish negative sample leads to slow convergence and poor Recall@K improvement. To this end, we propose an adaptive pooling strategy that allows the model to learn how to aggregate features through a combination of simple pooling methods. We also introduce a strategy to dynamically select a group of negative samples, which makes optimization converge faster and perform better. Experimental results on Flickr30K and MS-COCO demonstrate that a standard VSE equipped with our pooling and optimization strategies outperforms current state-of-the-art systems (by at least 1.0% on recall metrics) in both image-to-text and text-to-image retrieval. The source code of our experiments is available at https://github.com/96-Zachary/vse_2ad.
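
The abstract compresses the two proposed mechanisms into a sentence each, so a small sketch may help. The code below is a minimal, hypothetical PyTorch reading of both ideas, not the authors' implementation (that lives in the linked repository): AdaptivePool learns a convex combination of simple mean- and max-pooling, and top_k_triplet_loss relaxes the hardest-negative triplet loss to a group of the k hardest in-batch negatives. All names and hyperparameter values (k, margin) are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AdaptivePool(nn.Module):
        """Aggregate a variable-length set of features into one vector by
        learning a convex combination of simple pooling operators
        (mean and max here); a sketch of the adaptive-pooling idea."""
        def __init__(self):
            super().__init__()
            # One learnable logit per pooling operator; softmax keeps the
            # mixture weights positive and summing to one.
            self.logits = nn.Parameter(torch.zeros(2))

        def forward(self, feats):
            # feats: (batch, num_items, dim), e.g. region or token features
            pooled = torch.stack([feats.mean(dim=1),
                                  feats.max(dim=1).values])  # (2, batch, dim)
            w = F.softmax(self.logits, dim=0)                # (2,)
            return (w[:, None, None] * pooled).sum(dim=0)    # (batch, dim)

    def top_k_triplet_loss(img, txt, k=5, margin=0.2):
        """Hinge triplet loss over the k hardest in-batch negatives per
        query, rather than only the single hardest one."""
        img = F.normalize(img, dim=-1)
        txt = F.normalize(txt, dim=-1)
        sim = img @ txt.t()                # (batch, batch) cosine similarity
        pos = sim.diag().unsqueeze(1)      # matched pairs on the diagonal
        eye = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
        neg = sim.masked_fill(eye, float("-inf"))
        i2t = neg.topk(k, dim=1).values      # hardest image-to-text negatives
        t2i = neg.t().topk(k, dim=1).values  # hardest text-to-image negatives
        loss = F.relu(margin + i2t - pos) + F.relu(margin + t2i - pos)
        return loss.mean()

    # Toy usage: 8 images with 36 region features each, 8 matching captions.
    pool = AdaptivePool()
    img_vec = pool(torch.randn(8, 36, 512))
    txt_vec = torch.randn(8, 512)
    print(top_k_triplet_loss(img_vec, txt_vec, k=4).item())

With k = 1 this loss reduces to the usual hardest-negative triplet objective that the abstract argues against; averaging the hinge over several hard negatives is one plausible way to smooth the gradient signal and speed up convergence.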

Citation (APA)

Zhang, Z., Shu, C., Xiao, Y., Shen, Y., Zhu, D., Xiao, J., … Lu, Z. (2023). Improving Visual-Semantic Embedding with Adaptive Pooling and Optimization Objective. In EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 1209–1221). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.eacl-main.87
