Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions?

Citations: 16
Mendeley readers: 40

Abstract

Pre-trained vision and language models (Chen et al., 2023b,a; Dai et al., 2023; Li et al., 2023b) have demonstrated state-of-the-art capabilities on existing tasks involving images and texts, including visual question answering. However, it remains unclear whether these models can answer questions that not only query visual content but are also knowledge-intensive and information-seeking. In this study, we introduce INFOSEEK, a visual question answering dataset tailored for information-seeking questions that cannot be answered with only common-sense knowledge. Using INFOSEEK, we analyze various pre-trained visual question answering models and gain insights into their characteristics. Our findings reveal that state-of-the-art pre-trained multi-modal models (e.g., PaLI-X, BLIP2, etc.) face challenges in answering visual information-seeking questions, but fine-tuning on the INFOSEEK dataset elicits models to use fine-grained knowledge that was learned during their pre-training. Furthermore, we show that accurate visual entity recognition can be used to improve performance on INFOSEEK by retrieving relevant documents, showing a significant space for improvement.

Cite

CITATION STYLE

APA

Chen, Y., Hu, H., Luan, Y., Sun, H., Changpinyo, S., Ritter, A., & Chang, M. W. (2023). Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions? In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023) (pp. 14948–14968). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.925
