Pretrained Vision-Language Models (VLMs) have achieved remarkable performance in image retrieval from text. However, their performance drops drastically when confronted with linguistically complex texts that they struggle to comprehend. Inspired by the Divide-and-Conquer algorithm (Smith, 1985) and dual-process theory (Groves and Thompson, 1970), in this paper we regard linguistically complex texts as compound proposition texts composed of multiple simple proposition sentences, and propose an end-to-end Neural Divide-and-Conquer Reasoning framework, dubbed NDCR. It contains three main components: 1) Divide: a proposition generator divides the compound proposition text into simple proposition sentences and produces their corresponding representations; 2) Conquer: a pretrained VLM-based visual-linguistic interactor models the interaction between the decomposed proposition sentences and candidate images; 3) Combine: a neural-symbolic reasoner combines the resulting reasoning states to obtain the final solution via a neural logic reasoning approach. Under dual-process theory, the visual-linguistic interactor and the neural-symbolic reasoner can be regarded as the analogical System 1 and the logical System 2, respectively. We conduct extensive experiments on a challenging dataset for image retrieval from contextual descriptions. Experimental results and analyses indicate that NDCR significantly improves performance on this complex image-text reasoning problem. Code: https://github.com/YunxinLi/NDCR.
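As a rough, self-contained sketch of the Divide/Conquer/Combine flow described above, the Python snippet below mirrors the three stages. All names and internals here (divide, Interactor, SymbolicReasoner, the product-of-sigmoids soft conjunction, and the random stand-in features) are illustrative assumptions, not the authors' implementation; see the linked repository for the actual NDCR code.

```python
# Minimal sketch of a Divide-and-Conquer retrieval pipeline in the spirit
# of NDCR. Every module body is a placeholder (assumption), not the paper's
# method; the real code lives at https://github.com/YunxinLi/NDCR.
from typing import List

import torch
import torch.nn as nn


def divide(compound_text: str) -> List[str]:
    """Divide: split a compound proposition text into simple propositions.
    NDCR uses a learned proposition generator; naive sentence splitting is
    only a crude stand-in here."""
    return [s.strip() for s in compound_text.split(".") if s.strip()]


class Interactor(nn.Module):
    """Conquer (System 1, analogical): score each simple proposition against
    each candidate image. A pretrained VLM would supply real features; random
    projections stand in here."""

    def __init__(self, dim: int = 16):
        super().__init__()
        self.text_proj = nn.Linear(dim, dim)
        self.image_proj = nn.Linear(dim, dim)

    def forward(self, text_feats: torch.Tensor,
                image_feats: torch.Tensor) -> torch.Tensor:
        # (num_props, dim) x (num_images, dim) -> (num_props, num_images)
        t = self.text_proj(text_feats)
        v = self.image_proj(image_feats)
        return t @ v.T


class SymbolicReasoner(nn.Module):
    """Combine (System 2, logical): fuse per-proposition states into one
    decision. A differentiable AND (product of sigmoid scores) stands in for
    the paper's neural logic reasoning."""

    def forward(self, prop_image_scores: torch.Tensor) -> torch.Tensor:
        # A text matches an image only if all of its propositions do.
        return torch.sigmoid(prop_image_scores).prod(dim=0)


def retrieve(compound_text: str, image_feats: torch.Tensor) -> int:
    props = divide(compound_text)
    # Placeholder text encoder: one random feature row per proposition.
    text_feats = torch.randn(len(props), image_feats.shape[1])
    scores = Interactor(image_feats.shape[1])(text_feats, image_feats)
    final = SymbolicReasoner()(scores)  # shape: (num_images,)
    return int(final.argmax())          # index of the retrieved image


if __name__ == "__main__":
    images = torch.randn(10, 16)  # 10 candidate images with toy features
    text = "A dog runs on the beach. The sky is cloudy. A ball is in the air."
    print("retrieved image index:", retrieve(text, images))
```

The product over per-proposition scores encodes a soft logical conjunction: an image is retrieved only if it satisfies every simple proposition, which is the intuition behind the Combine step.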
Citation:
Li, Y., Hu, B., Ding, Y., Ma, L., & Zhang, M. (2023). A Neural Divide-and-Conquer Reasoning Framework for Image Retrieval from Linguistically Complex Text. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 16464–16476). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.909