Picture it in your mind: generating high level visual representations from textual descriptions


Abstract

In this paper we tackle the problem of image search when the query is a short textual description of the image the user is looking for. We choose to implement the actual search process as a similarity search in a visual feature space, by learning to translate a textual query into a visual representation. Searching in the visual feature space has the advantage that any update to the translation model does not require reprocessing the (typically huge) image collection on which the search is performed. We propose Text2Vis, a neural network that generates a visual representation, in the visual feature space of the fc6-fc7 layers of ImageNet, from a short descriptive text. Text2Vis optimizes two loss functions, using a stochastic loss-selection method. A visual-focused loss is aimed at learning the actual text-to-visual-feature mapping, while a text-focused loss is aimed at modeling the higher-level semantic concepts expressed in language and at countering the visual loss's tendency to overfit on non-relevant visual components. We report preliminary results on the MS-COCO dataset.
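The retrieval step described above can be sketched as a nearest-neighbor search over precomputed visual features: once the translation model has mapped the textual query into the visual feature space, ranking the collection only requires comparing that vector against stored image features. The sketch below is illustrative and not the paper's implementation; the 4096-dimensional random features stand in for fc6/fc7 activations, and `cosine_search` is a hypothetical helper name.

```python
import numpy as np

def cosine_search(query_vec, image_feats, k=3):
    """Rank images by cosine similarity between the translated query
    vector and each image's precomputed visual feature vector."""
    q = query_vec / np.linalg.norm(query_vec)
    F = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    sims = F @ q                      # cosine similarity to every image
    return np.argsort(-sims)[:k]      # indices of the top-k images

rng = np.random.default_rng(0)
# Stand-in for a collection of 1000 images with 4096-d fc6/fc7 features.
feats = rng.normal(size=(1000, 4096))
# Simulate a query whose translated vector lies close to image 42.
query = feats[42] + 0.01 * rng.normal(size=4096)
top = cosine_search(query, feats, k=3)
```

Because the image features are computed once and stored, retraining or updating the text-to-visual translation model leaves `feats` untouched; only the query-side vector changes.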

Citation (APA)

Carrara, F., Esuli, A., Fagni, T., Falchi, F., & Moreo Fernández, A. (2018). Picture it in your mind: generating high level visual representations from textual descriptions. Information Retrieval Journal, 21(2–3), 208–229. https://doi.org/10.1007/s10791-017-9318-6
