Negative Sample is Negative in Its Own Way: Tailoring Negative Sentences for Image-Text Retrieval

Abstract

A matching model is essential for the image-text retrieval framework. Existing research usually trains the model with a triplet loss and explores various strategies to retrieve hard negative sentences from the dataset. We argue that the current retrieval-based approach to negative-sample construction is limited by the scale of the dataset and thus fails to identify sufficiently difficult negative samples for every image. We propose TAiloring neGative Sentences with Discrimination and Correction (TAGS-DC), which automatically generates synthetic sentences as negative samples. TAGS-DC uses masking and refilling to generate synthetic negative sentences of higher difficulty. To maintain this difficulty during training, we mutually improve retrieval and generation through parameter sharing. To further exploit the fine-grained semantic mismatch in a negative sentence, we propose two auxiliary tasks, word discrimination and word correction, to improve training. In experiments, we verify the effectiveness of our model on MS-COCO and Flickr30K against current state-of-the-art models and demonstrate its robustness and faithfulness in further analysis.
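The two ingredients the abstract describes can be illustrated with a toy sketch: mask-and-refill generation of a synthetic negative caption, and a triplet ranking loss that uses it as a hard negative. This is not the authors' implementation; a real system would refill masked words with an image-conditioned masked language model rather than random vocabulary samples, and similarities would come from the matching model.

```python
import random

def mask_and_refill(caption, vocab, mask_rate=0.3, rng=None):
    """Swap a fraction of the caption's words for sampled substitutes to
    form a synthetic negative sentence (toy stand-in for an MLM refiller)."""
    rng = rng or random.Random(0)
    words = caption.split()
    n_mask = max(1, int(len(words) * mask_rate))
    for i in rng.sample(range(len(words)), n_mask):
        words[i] = rng.choice(vocab)
    return " ".join(words)

def triplet_loss(pos_sim, neg_sim, margin=0.2):
    """Standard hinge loss: push the matched pair's similarity above the
    negative's by at least the margin."""
    return max(0.0, margin - pos_sim + neg_sim)
```

For example, with a positive similarity of 0.8 and a synthetic negative scored at 0.7, the loss is max(0, 0.2 - 0.8 + 0.7) = 0.1; an easy negative at 0.1 yields zero loss, which is why harder negatives provide a stronger training signal.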

Citation (APA)

Fan, Z., Wei, Z., Li, Z., Wang, S., Huang, X., & Fan, J. (2022). Negative Sample is Negative in Its Own Way: Tailoring Negative Sentences for Image-Text Retrieval. In Findings of the Association for Computational Linguistics: NAACL 2022 - Findings (pp. 2667–2678). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-naacl.204
