Object-less Vision-language Model on Visual Question Classification for Blind People

Abstract

Despite the long-standing presence of question types in Visual Question Answering datasets, Visual Question Classification has not received enough attention from the research community. Unlike general text classification, a visual question requires an understanding of visual and textual features simultaneously. Beyond the novelty of Visual Question Classification itself, the most important and practical goal we concentrate on is addressing the weakness of object detection on object-less images. We therefore propose an Object-less Visual Question Classification model, OL–LXMERT, which generates virtual objects to replace the dependence on object detection found in previous vision-language systems. Our architecture digests both local and global image features to understand the relationship between the modalities. In experiments on our modified VizWiz-VQC 2020 dataset collected from blind people, OL–LXMERT achieves promising results on this new multi-modal task. Furthermore, detailed ablation studies show the strength and potential of our model in comparison to competitive approaches.
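For illustration only, the sketch below shows one way "virtual objects" could stand in for detector regions: grid cells of a CNN feature map serve as pseudo-regions and a pooled vector serves as the global scene feature, both projected to the width expected by an LXMERT-style cross-modal encoder. The backbone, grid size, and projection dimension here are assumptions for the sketch, not details confirmed by the paper.

```python
# Hypothetical sketch: derive N "virtual object" features plus one global
# feature from a CNN grid, instead of running an object detector.
# All module names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class VirtualObjectEncoder(nn.Module):
    """Produces pseudo-region features and a global feature per image."""
    def __init__(self, num_regions: int = 36, feat_dim: int = 768):
        super().__init__()
        backbone = models.resnet50(weights=None)  # any CNN backbone; no download
        # Keep only the convolutional stages so we get a spatial feature map.
        self.stem = nn.Sequential(*list(backbone.children())[:-2])
        grid = int(num_regions ** 0.5)            # e.g. 6x6 grid -> 36 regions
        self.pool = nn.AdaptiveAvgPool2d((grid, grid))
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        self.proj = nn.Linear(2048, feat_dim)     # map to the encoder width

    def forward(self, images: torch.Tensor):
        fmap = self.stem(images)                            # (B, 2048, H, W)
        local = self.pool(fmap).flatten(2).transpose(1, 2)  # (B, N, 2048) pseudo-regions
        scene = self.global_pool(fmap).flatten(1)           # (B, 2048) global feature
        return self.proj(local), self.proj(scene)           # (B, N, 768), (B, 768)

if __name__ == "__main__":
    enc = VirtualObjectEncoder()
    imgs = torch.randn(2, 3, 224, 224)
    regions, scene = enc(imgs)
    print(regions.shape, scene.shape)  # torch.Size([2, 36, 768]) torch.Size([2, 768])
```

In such a setup, the pseudo-region features would take the place of the detector-region inputs of an LXMERT-style encoder (with grid coordinates substituting for bounding-box position embeddings), while the global feature summarizes the whole scene; how OL–LXMERT actually constructs its virtual objects is described in the paper itself.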


Citation (APA)

Le, T., Pho, K., Bui, T., Nguyen, H. T., & Le Nguyen, M. (2022). Object-less Vision-language Model on Visual Question Classification for Blind People. In International Conference on Agents and Artificial Intelligence (Vol. 3, pp. 180–187). Science and Technology Publications, Lda. https://doi.org/10.5220/0010797400003116

