Cross-domain image retrieval with attention modeling


Abstract

With the proliferation of e-commerce websites and the ubiquity of smartphones, cross-domain image retrieval, which uses photos taken on smartphones as queries to search for products on e-commerce websites, is emerging as a popular application. One challenge of this task is to locate the attention of both the query and the database images. In particular, database images, e.g., of fashion products, on e-commerce websites are typically displayed together with other accessories, and the images taken by users contain noisy backgrounds and large variations in orientation and lighting. Consequently, their attention is difficult to locate. In this paper, we exploit the rich tag information available on e-commerce websites to locate the attention of database images. For query images, we use each candidate database image as the context to locate the query attention. Novel deep convolutional neural network architectures, namely TagYNet and CtxYNet, are proposed to learn the attention weights and then extract effective representations of the images. Experimental results on public datasets confirm that our approaches yield significant improvements over existing methods in terms of retrieval accuracy and efficiency.
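The abstract describes attention weights that are conditioned on side information: tag embeddings for database images and candidate database images for queries. Below is a minimal, hypothetical sketch of such context-conditioned spatial attention pooling in PyTorch. The class name ContextAttentionPool, the layer sizes, and the additive scoring function are illustrative assumptions, not the paper's actual TagYNet/CtxYNet definitions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextAttentionPool(nn.Module):
    """Pool a CNN feature map into a single vector using attention weights
    conditioned on a context embedding (e.g. a tag embedding for database
    images, or a candidate-image embedding for query images).

    Hypothetical sketch; not the paper's exact TagYNet/CtxYNet layers.
    """

    def __init__(self, feat_dim, ctx_dim, hidden_dim=256):
        super().__init__()
        self.proj_feat = nn.Linear(feat_dim, hidden_dim)
        self.proj_ctx = nn.Linear(ctx_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, feat_map, ctx):
        # feat_map: (B, C, H, W) convolutional features
        # ctx:      (B, D) context embedding (tag or candidate image)
        feats = feat_map.flatten(2).transpose(1, 2)            # (B, H*W, C)
        # score each spatial location against the context
        h = torch.tanh(self.proj_feat(feats) + self.proj_ctx(ctx).unsqueeze(1))
        attn = F.softmax(self.score(h).squeeze(-1), dim=1)     # (B, H*W)
        # attention-weighted sum gives the image representation
        return torch.bmm(attn.unsqueeze(1), feats).squeeze(1)  # (B, C)


if __name__ == "__main__":
    pool = ContextAttentionPool(feat_dim=512, ctx_dim=300)
    feat_map = torch.randn(2, 512, 7, 7)   # e.g. conv5 feature maps
    ctx = torch.randn(2, 300)              # e.g. averaged tag embeddings
    rep = pool(feat_map, ctx)
    print(rep.shape)  # torch.Size([2, 512])
```

A retrieval system of this kind would typically compare the pooled query and database representations with cosine or Euclidean distance and rank candidates accordingly.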

Citation (APA)

Ji, X., Wang, W., Zhang, M., & Yang, Y. (2017). Cross-domain image retrieval with attention modeling. In MM 2017 - Proceedings of the 2017 ACM Multimedia Conference (pp. 1654–1662). Association for Computing Machinery, Inc. https://doi.org/10.1145/3123266.3123429
