Answer Distillation for Visual Question Answering


Abstract

Answering open-ended questions in Visual Question Answering (VQA) is a challenging task. Because the answers are entirely free-form, the answer space for open-ended questions is in theory infinite, which makes it difficult for algorithms to predict the correct answer. In this paper, we propose a method named answer distillation to reduce the scale of the answer space and confine the correct result to a small set of answer candidates. Specifically, we design a two-stage architecture to answer a question: first, we develop an answer distillation network that distills the answers, converting an open-ended question into a multiple-choice one with a short list of answer candidates; then, we make full use of the knowledge from the answer candidates to guide the visual attention and refine the prediction. Extensive experiments validate the effectiveness of our answer distillation architecture. The results show that our method effectively compresses the answer space and improves accuracy on the open-ended task, achieving new state-of-the-art performance on the COCO-VQA dataset.
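The abstract describes the pipeline only at a high level. As a rough illustration of the two-stage idea, the PyTorch sketch below shows one plausible realization: a first-stage network scores the full answer vocabulary and keeps the top-k candidates, and a second-stage network embeds each candidate, uses it to attend over image regions, and rescores the candidates. All class names, feature dimensions, and the concatenation-based attention form here are assumptions made for illustration; they are not taken from the paper.

import torch
import torch.nn as nn

class AnswerDistiller(nn.Module):
    # Stage 1 (sketch): score every answer in the vocabulary and keep the
    # top-k candidates, turning the open-ended question into a
    # multiple-choice one.
    def __init__(self, fused_dim, vocab_size, k=10):
        super().__init__()
        self.classifier = nn.Linear(fused_dim, vocab_size)
        self.k = k

    def forward(self, fused):                       # fused: (B, fused_dim)
        logits = self.classifier(fused)             # (B, vocab_size)
        scores, candidates = logits.topk(self.k, dim=-1)
        return candidates, scores                   # each (B, k)

class CandidateGuidedScorer(nn.Module):
    # Stage 2 (sketch): embed each candidate answer, use it to guide
    # attention over image region features, and score each
    # (image, question, candidate) triple.
    def __init__(self, vocab_size, ans_dim, img_dim, q_dim, hid=512):
        super().__init__()
        self.ans_embed = nn.Embedding(vocab_size, ans_dim)
        self.att = nn.Linear(img_dim + ans_dim, 1)
        self.score = nn.Sequential(
            nn.Linear(img_dim + q_dim + ans_dim, hid), nn.ReLU(),
            nn.Linear(hid, 1))

    def forward(self, img_feats, q_feat, candidates):
        # img_feats: (B, R, img_dim); q_feat: (B, q_dim); candidates: (B, k)
        B, R, _ = img_feats.shape
        a = self.ans_embed(candidates)                        # (B, k, ans_dim)
        img = img_feats.unsqueeze(1).expand(B, a.size(1), R, -1)
        a_exp = a.unsqueeze(2).expand(-1, -1, R, -1)
        # Candidate-conditioned attention weights over the R regions.
        att = self.att(torch.cat([img, a_exp], -1)).softmax(dim=2)
        v = (att * img).sum(dim=2)                  # attended image, per candidate
        q = q_feat.unsqueeze(1).expand(-1, a.size(1), -1)
        return self.score(torch.cat([v, q, a], -1)).squeeze(-1)  # (B, k)

A toy forward pass, with dimensions chosen arbitrarily (e.g. 36 region features per image, a 3000-word answer vocabulary):

B, R, k = 2, 36, 10
img = torch.randn(B, R, 2048)
q = torch.randn(B, 1024)
fused = torch.randn(B, 1024)                        # fused image-question feature
distiller = AnswerDistiller(1024, vocab_size=3000, k=k)
scorer = CandidateGuidedScorer(3000, 300, 2048, 1024)
cands, _ = distiller(fused)                         # distill to k candidates
best = scorer(img, q, cands).argmax(dim=1, keepdim=True)
prediction = cands.gather(1, best)                  # final answer index, (B, 1)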

Citation (APA)

Fang, Z., Liu, J., Tang, Q., Li, Y., & Lu, H. (2019). Answer Distillation for Visual Question Answering. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11361 LNCS, pp. 72–87). Springer Verlag. https://doi.org/10.1007/978-3-030-20887-5_5
