Multi-Modal Alignment of Visual Question Answering Based on Multi-Hop Attention Mechanism

Abstract

The alignment of information between the image and the question is of great significance in the visual question answering (VQA) task. Self-attention is commonly used to generate attention weights between the image and the question; these weights align the two modalities by letting the model select the image regions relevant to the question. However, under standard self-attention, the attention weight between two objects is determined solely by the representations of those two objects and ignores the influence of the surrounding objects. This contribution proposes a novel multi-hop attention alignment method that enriches each object with information from its surroundings before self-attention aligns the two modalities. To exploit position information during alignment, we also propose a position embedding mechanism, which extracts the position of each object and embeds it so that each question word is aligned with the correct location in the image. On the VQA2.0 dataset, our model achieves a validation accuracy of 65.77%, outperforming several state-of-the-art methods and demonstrating the effectiveness of the proposed approach.
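To make the abstract's two ideas concrete, below is a minimal sketch, assuming PyTorch, of multi-hop self-attention that enriches each image region with its surroundings before cross-modal alignment, combined with a position embedding of region coordinates. The class name MultiHopAlignment, the hop count n_hops, and the 4-d bounding-box input are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming PyTorch; names and shapes are illustrative,
# not the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHopAlignment(nn.Module):
    def __init__(self, dim: int, n_hops: int = 2):
        super().__init__()
        self.n_hops = n_hops
        # Embed 4-d normalized bounding boxes (x1, y1, x2, y2) into the
        # feature space so alignment can exploit object positions.
        self.pos_embed = nn.Linear(4, dim)
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)

    def forward(self, words, regions, boxes):
        # words:   (B, T, dim) question word features
        # regions: (B, N, dim) image region features
        # boxes:   (B, N, 4)   normalized region coordinates
        scale = regions.size(-1) ** 0.5
        regions = regions + self.pos_embed(boxes)
        # Multi-hop enrichment: each hop lets a region aggregate its
        # surroundings, so the later cross-modal weight between a word
        # and a region no longer depends on those two features alone.
        for _ in range(self.n_hops):
            a = F.softmax(regions @ regions.transpose(-2, -1) / scale, dim=-1)
            regions = regions + a @ regions
        # Cross-modal alignment: words attend over the enriched regions.
        q, k, v = self.q_proj(words), self.k_proj(regions), self.v_proj(regions)
        attn = F.softmax(q @ k.transpose(-2, -1) / scale, dim=-1)
        return words + attn @ v


if __name__ == "__main__":
    model = MultiHopAlignment(dim=512, n_hops=2)
    words = torch.randn(2, 14, 512)    # 14 question tokens
    regions = torch.randn(2, 36, 512)  # 36 detected regions
    boxes = torch.rand(2, 36, 4)
    print(model(words, regions, boxes).shape)  # torch.Size([2, 14, 512])
```

The design point the sketch illustrates is that after k hops, the attention weight between a word and a region reflects paths through intermediate regions, which is the "surrounding information" the abstract says plain self-attention ignores.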

Citation (APA)

Xia, Q., Yu, C., Hou, Y., Peng, P., Zheng, Z., & Chen, W. (2022). Multi-Modal Alignment of Visual Question Answering Based on Multi-Hop Attention Mechanism. Electronics (Switzerland), 11(11). https://doi.org/10.3390/electronics11111778
