Picture it in your mind: generating high level visual representations from textual descriptions

Abstract

In this paper we tackle the problem of image search when the query is a short textual description of the image the user is looking for. We implement the actual search process as a similarity search in a visual feature space, by learning to translate a textual query into a visual representation. Searching in the visual feature space has the advantage that any update to the translation model does not require reprocessing the (typically huge) image collection on which the search is performed. We propose various neural network models of increasing complexity that learn to generate, from a short descriptive text, a high-level visual representation in a visual feature space such as the pool5 layer of ResNet-152 or the fc6–fc7 layers of an AlexNet trained on the ILSVRC12 and Places databases. The Text2Vis models we explore include (1) a relatively simple regressor network relying on a bag-of-words representation of the textual descriptions, (2) a deep recurrent network that is sensitive to word order, and (3) a wide-and-deep model that combines a stacked LSTM deep network with a wide regressor network. We compare the proposed models with other search strategies, including textual search methods that exploit state-of-the-art caption generation models to index the image collection.
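To make the idea concrete, the following is a minimal sketch, not the authors' implementation, of the simplest Text2Vis variant described above: a bag-of-words query vector regressed onto a visual feature space (here assumed to be the 2048-dimensional pool5 layer of ResNet-152), followed by a cosine-similarity search over precomputed image features. The vocabulary size, hidden width, and class/function names (BowText2Vis, search) are illustrative assumptions, not taken from the paper.

```python
# A hedged sketch of the Text2Vis retrieval idea: regress a bag-of-words text
# vector into a visual feature space, then rank indexed images by cosine
# similarity to the predicted vector. Hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE = 10_000   # assumed bag-of-words vocabulary size
VISUAL_DIM = 2048     # dimensionality of ResNet-152 pool5 features

class BowText2Vis(nn.Module):
    """Simple regressor: bag-of-words query vector -> visual feature vector."""
    def __init__(self, vocab_size=VOCAB_SIZE, hidden=1024, visual_dim=VISUAL_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vocab_size, hidden),
            nn.ReLU(),
            nn.Linear(hidden, visual_dim),
        )

    def forward(self, bow):
        return self.net(bow)

def search(query_vec, image_features, k=5):
    """Return indices of the k images whose precomputed visual features are
    most cosine-similar to the predicted query vector."""
    sims = F.cosine_similarity(query_vec.unsqueeze(0), image_features, dim=1)
    return sims.topk(k).indices

if __name__ == "__main__":
    model = BowText2Vis()
    # Toy data: one bag-of-words query and a stand-in collection of
    # 100 precomputed pool5 vectors for the indexed images.
    bow_query = torch.zeros(VOCAB_SIZE)
    bow_query[[12, 345, 678]] = 1.0          # words present in the query
    image_feats = torch.randn(100, VISUAL_DIM)
    with torch.no_grad():
        pred = model(bow_query)
    print(search(pred, image_feats, k=5))
```

Because the image features are computed once and stored, only the text-side regressor needs retraining when the translation model is updated, which is the advantage the abstract highlights over re-indexing the whole collection.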

Cite

APA

Carrara, F., Esuli, A., Fagni, T., Falchi, F., & Moreo Fernández, A. (2018). Picture it in your mind: generating high level visual representations from textual descriptions. Information Retrieval Journal, 21(2–3), 208–229. https://doi.org/10.1007/s10791-017-9318-6
