DetGPT: Detect What You Need via Reasoning

Citations: 13
Mendeley readers: 69

Abstract

Recently, vision-language models (VLMs) such as GPT-4, LLaVA, and MiniGPT-4 have achieved remarkable breakthroughs and excel at generating image descriptions and answering visual questions. However, they are difficult to apply to an embodied agent for completing real-world tasks, such as grasping, since they cannot localize the object of interest. In this paper, we introduce a new task termed reasoning-based object detection, which aims to localize the objects of interest in a visual scene based on arbitrary human instructions. Our proposed method, called DetGPT, leverages an instruction-tuned VLM to reason about the instruction and identify the objects of interest, followed by an open-vocabulary object detector that localizes them. DetGPT can automatically locate the object of interest based on the user's expressed intent, even if the object is not explicitly mentioned. This ability makes our system potentially applicable across a wide range of fields, from robotics to autonomous driving. To facilitate research on reasoning-based object detection, we curate and open-source a benchmark named RD-Bench for instruction tuning and evaluation. Overall, our proposed task and DetGPT demonstrate the potential for more sophisticated and intuitive interactions between humans and machines.
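
The abstract describes a two-stage pipeline: an instruction-tuned VLM first reasons over the user's request and the image to name the relevant object categories, and an open-vocabulary detector then localizes those categories. The sketch below illustrates that flow under stated assumptions; the helper names and prompts are hypothetical placeholders, not the authors' released API.

```python
# Minimal sketch of the reasoning-then-detection flow described in the abstract.
# query_vlm and detect_open_vocab are hypothetical stubs, not DetGPT's actual code.

from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)


def query_vlm(image_path: str, instruction: str) -> List[str]:
    """Ask an instruction-tuned VLM which object categories in the image
    satisfy the user's request, and return their names.
    """
    raise NotImplementedError("plug in an instruction-tuned VLM here")


def detect_open_vocab(image_path: str, categories: List[str]) -> List[Tuple[str, Box]]:
    """Run an open-vocabulary detector restricted to the reasoned categories
    and return (label, bounding-box) pairs.
    """
    raise NotImplementedError("plug in an open-vocabulary detector here")


def reasoning_based_detection(image_path: str, instruction: str) -> List[Tuple[str, Box]]:
    # Stage 1: reason about the instruction to find relevant object names,
    # even when they are not mentioned explicitly
    # (e.g. "I want a cold drink" -> ["refrigerator"]).
    categories = query_vlm(image_path, instruction)
    # Stage 2: localize those objects with bounding boxes.
    return detect_open_vocab(image_path, categories)
```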

Citation (APA)

Pi, R., Gao, J., Diao, S., Pan, R., Dong, H., Zhang, J., … Zhang, T. (2023). DetGPT: Detect What You Need via Reasoning. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 14172–14189). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.876
