MultiQG-TI: Towards Question Generation from Multi-modal Sources


Abstract

We study the new problem of automatic question generation (QG) from multi-modal sources containing images and text, significantly expanding the scope of most existing work, which focuses exclusively on QG from textual sources. We propose a simple solution to this new problem, called MultiQG-TI, which enables a text-only question generator to process visual input in addition to textual input. Specifically, we leverage an image-to-text model and an optical character recognition (OCR) model to obtain a textual description of the image and to extract any text in the image, respectively, and then feed these together with the input text to the question generator. We fine-tune only the question generator while keeping the other components fixed. On the challenging ScienceQA dataset, we demonstrate that MultiQG-TI significantly outperforms ChatGPT with few-shot prompting, despite having hundreds of times fewer trainable parameters. Additional analyses empirically confirm the necessity of both visual and textual signals for QG and show the impact of various modeling choices. Code is available at https://rb.gy/020tw.
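The abstract describes a pipeline in which frozen vision components translate the image into text before a text-only question generator consumes it. The following is a minimal sketch of that idea, assuming BLIP for image captioning, Tesseract for OCR, and Flan-T5 as the question generator; these model choices and the prompt format are illustrative assumptions, not necessarily those used in the paper.

```python
# Sketch of a MultiQG-TI-style pipeline: image -> caption + OCR text -> text-only QG model.
# Assumed models: BLIP (captioning), Tesseract (OCR), Flan-T5 (question generation).
from PIL import Image
import pytesseract
from transformers import (
    BlipProcessor,
    BlipForConditionalGeneration,
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
)

# Frozen image-to-text (captioning) model.
caption_processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
caption_model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# Text-only question generator; in the paper's setup this is the only component fine-tuned.
qg_tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
qg_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")


def generate_question(image_path: str, context_text: str) -> str:
    image = Image.open(image_path).convert("RGB")

    # 1) Textual description of the image from the frozen captioning model.
    caption_inputs = caption_processor(images=image, return_tensors="pt")
    caption_ids = caption_model.generate(**caption_inputs, max_new_tokens=40)
    caption = caption_processor.decode(caption_ids[0], skip_special_tokens=True)

    # 2) Any text embedded in the image, extracted via OCR.
    ocr_text = pytesseract.image_to_string(image).strip()

    # 3) Concatenate the visual-derived text with the input text and prompt the QG model.
    #    The prompt template below is a hypothetical example.
    prompt = (
        "generate a question.\n"
        f"image description: {caption}\n"
        f"image text: {ocr_text}\n"
        f"context: {context_text}"
    )
    qg_inputs = qg_tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = qg_model.generate(**qg_inputs, max_new_tokens=64)
    return qg_tokenizer.decode(output_ids[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate_question("example.png", "A diagram of the water cycle."))
```

Because the captioning and OCR models stay frozen, only the sequence-to-sequence QG model's parameters would be updated during fine-tuning, which keeps the trainable parameter count small relative to end-to-end multi-modal models.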

Citation (APA)
Wang, Z., & Baraniuk, R. G. (2023). MultiQG-TI: Towards Question Generation from Multi-modal Sources. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 682–691). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.bea-1.55
