Sherlock: Scalable fact learning in images


Abstract

The human visual system is capable of learning an unbounded number of facts from images, including not only objects but also their attributes, actions, and interactions. Such uniform understanding of visual facts has not received enough attention. Existing visual recognition systems are typically modeled differently for each fact type, such as objects, actions, and interactions. We propose a setting where all these facts can be modeled simultaneously, with the capacity to understand an unbounded number of facts in a structured way. The training data comes as structured facts in images, including (1) objects, (2) attributes, (3) actions, and (4) interactions. Each fact has a language view (e.g., <boy, playing>) and a visual view (an image). We show that learning visual facts in a structured way enables not only a uniform but also a generalizable visual understanding. We propose and investigate recent and strong approaches from the multiview learning literature and also introduce a structured embedding model. We applied the investigated methods to several datasets that we augmented with structured facts, as well as a large-scale dataset of >202,000 facts and 814,000 images. Our results show the advantage of relating facts through their structure, as the proposed model does, compared to the baselines.
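The abstract describes facts of different types (objects, attributes/actions, interactions) sharing one structured representation with language and visual views. A minimal sketch of that idea, assuming a hypothetical `Fact` tuple where lower-order facts simply leave trailing slots empty (the class and field names are illustrative, not the paper's API):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: one structured tuple <S, P, O> covers all fact types.
# An object fact fills only S, an attribute/action fact fills S and P,
# and an interaction fact fills all three slots.

@dataclass(frozen=True)
class Fact:
    s: str                   # subject, e.g. "boy"
    p: Optional[str] = None  # predicate: attribute or action
    o: Optional[str] = None  # object of an interaction

    def order(self) -> int:
        """Number of filled slots: 1 = object, 2 = attribute/action, 3 = interaction."""
        return sum(x is not None for x in (self.s, self.p, self.o))

facts = [
    Fact("boy"),                     # object fact
    Fact("boy", "playing"),          # action fact
    Fact("boy", "riding", "horse"),  # interaction fact
]
orders = [f.order() for f in facts]  # one uniform representation, three fact types
```

Under this view, a single embedding model can map every fact, regardless of order, into the same space alongside its image, which is what makes the uniform treatment in the abstract possible.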

Citation (APA)

Elhoseiny, M., Cohen, S., Chang, W., Price, B., & Elgammal, A. (2017). Sherlock: Scalable fact learning in images. In 31st AAAI Conference on Artificial Intelligence, AAAI 2017 (pp. 4016–4024). AAAI press. https://doi.org/10.1609/aaai.v31i1.11214
