RAID: A relation-augmented image descriptor

7 citations · 43 Mendeley readers

Abstract

As humans, we regularly interpret scenes based on how objects are related, rather than based on the objects themselves. For example, we see a person riding an object X, or a plank bridging two objects. Current methods provide limited support for searching content based on such relations. We present RAID, a relation-augmented image descriptor that supports queries based on inter-region relations. The key idea of the descriptor is to encode a region-to-region relation as the spatial distribution of point-to-region relationships between the two image regions. RAID allows sketch-based retrieval and requires minimal training data, making it suitable even for querying uncommon relations. We evaluate the descriptor by querying large image databases and successfully retrieve nontrivial images demonstrating complex inter-region relations that are easily missed or erroneously classified by existing methods. We also assess the robustness of RAID on multiple datasets, even when the region segmentation is computed automatically or is very noisy.
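
The key idea lends itself to a compact implementation. Below is a minimal, illustrative sketch of such a relation descriptor, not the authors' implementation: it assumes two binary region masks on a shared pixel grid, uses normalized distance to a region as the point-to-region relationship, and histograms that relationship over angular sectors around the first region's centroid. All names (raid_like_descriptor, relation_distance) are hypothetical.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def raid_like_descriptor(mask_a, mask_b, n_angle=8, n_rel=4):
        """Toy relation descriptor for an ordered region pair (A, B).

        For every pixel of region A, compute a simple point-to-region
        relationship to region B (normalized distance, 0 inside B) and
        histogram those values over angular sectors around A's centroid,
        giving a crude n_angle x n_rel "spatial distribution of
        point-to-region relationships".
        """
        # Point-to-region relationship field: distance to B, squashed to [0, 1).
        dist_b = distance_transform_edt(~mask_b)
        rel = dist_b / (dist_b + dist_b.mean() + 1e-9)

        # Pixels of A and their angular sector around A's centroid.
        ys, xs = np.nonzero(mask_a)
        cy, cx = ys.mean(), xs.mean()
        ang = np.arctan2(ys - cy, xs - cx)  # in [-pi, pi]
        ang_bin = ((ang + np.pi) / (2 * np.pi) * n_angle).astype(int) % n_angle
        rel_bin = np.clip((rel[ys, xs] * n_rel).astype(int), 0, n_rel - 1)

        # Joint histogram of (sector, relationship), normalized to sum to 1.
        hist = np.zeros((n_angle, n_rel))
        np.add.at(hist, (ang_bin, rel_bin), 1.0)
        return (hist / hist.sum()).ravel()

    def relation_distance(d1, d2):
        # L1 distance between normalized histograms; lower = more similar.
        return np.abs(d1 - d2).sum()

Retrieval in this toy setting amounts to computing the descriptor for a sketched query pair and ranking all region pairs in a database by relation_distance, which mirrors the paper's sketch-based query scenario in spirit.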

Citation (APA)

Guerrero, P., Mitra, N. J., & Wonka, P. (2016). RAID: A relation-augmented image descriptor. ACM Transactions on Graphics, 35(4). https://doi.org/10.1145/2897824.2925939
