Rethinking diversified and discriminative proposal generation for visual grounding


Abstract

Visual grounding aims to localize the object in an image referred to by a textual query phrase. Various visual grounding approaches have been proposed, and the problem can be modularized into a general framework: proposal generation, multi-modal feature representation, and proposal ranking. Of these three modules, most existing approaches focus on the latter two, while the importance of proposal generation is generally neglected. In this paper, we rethink what properties make a good proposal generator. We introduce diversity and discrimination simultaneously when generating proposals, and in doing so propose the Diversified and Discriminative Proposal Networks (DDPN) model. Based on the proposals generated by DDPN, we propose a high-performance baseline model for visual grounding and evaluate it on four benchmark datasets. Experimental results demonstrate that our model delivers significant improvements on all the tested datasets (e.g., 18.8% improvement on ReferItGame and 8.2% improvement on Flickr30k Entities over the existing state of the art, respectively).
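The three-module framework described in the abstract can be sketched as follows. This is a hypothetical, heavily simplified illustration: the function names, toy features, and scoring weights are placeholders chosen for clarity, not the paper's actual DDPN implementation.

```python
# Illustrative sketch of the general visual-grounding pipeline:
# (1) proposal generation, (2) multi-modal feature representation,
# (3) proposal ranking. All values below are toy placeholders.

def generate_proposals(image):
    """Module 1: propose candidate boxes (x, y, w, h).
    A diversified, discriminative generator (the paper's focus)
    would cover many object categories with object-like boxes."""
    return [(10, 20, 50, 80), (100, 40, 60, 60), (30, 30, 200, 150)]

def encode(proposal, query):
    """Module 2: joint multi-modal feature for a (proposal, query) pair.
    Toy feature mixing box geometry with query length."""
    x, y, w, h = proposal
    return [w * h, len(query), x + y]

def rank(proposals, query):
    """Module 3: score each proposal against the query, best first.
    Toy score: dot product with a fixed, illustrative weight vector."""
    weights = [1e-4, 0.5, -1e-3]

    def score(p):
        return sum(w * f for w, f in zip(weights, encode(p, query)))

    return sorted(proposals, key=score, reverse=True)

ranked = rank(generate_proposals("image placeholder"),
              "the large sofa on the left")
best_box = ranked[0]  # the grounded region for the query
```

In this framing, the paper's contribution lives in `generate_proposals`: if the candidate boxes never cover the referred object, no amount of feature engineering or ranking in the later modules can recover it.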

Citation (APA)

Yu, Z., Yu, J., Xiang, C., Zhao, Z., Tian, Q., & Tao, D. (2018). Rethinking diversified and discriminative proposal generation for visual grounding. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 1114–1120). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/155
