Efficient clothing retrieval with semantic-preserving visual phrases


Abstract

In this paper, we address the problem of large-scale cross-scenario clothing retrieval with semantic-preserving visual phrases (SPVP). Since human parts are important cues for clothing detection and segmentation, we first detect human parts as the semantic context and refine the human-part regions with sparse background reconstruction. The semantic parts are then encoded into a vocabulary tree under the bag-of-visual-words (BOW) framework, and the contextual constraints among visual words from different human parts are exploited through the SPVP. Moreover, the SPVP is integrated into the inverted index structure to accelerate retrieval. Experiments and comparisons on our clothing dataset indicate that the SPVP significantly enhances the discriminative power of local features with only a slight increase in memory usage and runtime compared to the BOW model. As a result, the approach outperforms both a state-of-the-art approach and two clothing search engines. © 2013 Springer-Verlag.
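To make the indexing idea in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of an inverted index keyed by visual phrases, where a phrase pairs quantized visual words drawn from two different human-part regions of the same image. All names here (VisualPhrase, PhraseIndex, extract_phrases) are illustrative assumptions; the actual paper builds on a vocabulary tree and a more elaborate scoring scheme.

```python
# Illustrative sketch only: visual phrases as pairs of (part, visual word),
# indexed in an inverted file mapping each phrase to the images containing it.
from collections import defaultdict
from itertools import combinations
from typing import Dict, List, Tuple

# A visual phrase pairs a visual-word id with the human part it came from,
# so the same word appearing in different parts yields different phrases.
VisualPhrase = Tuple[Tuple[str, int], Tuple[str, int]]


def extract_phrases(part_words: Dict[str, List[int]]) -> List[VisualPhrase]:
    """Build phrases from visual words of different human parts.

    part_words maps a part name (e.g. 'torso', 'legs') to the visual-word
    ids quantized from that region with a BOW vocabulary.
    """
    phrases = []
    for (part_a, words_a), (part_b, words_b) in combinations(part_words.items(), 2):
        for wa in words_a:
            for wb in words_b:
                phrases.append(((part_a, wa), (part_b, wb)))
    return phrases


class PhraseIndex:
    """Inverted index: visual phrase -> list of image ids containing it."""

    def __init__(self) -> None:
        self.postings: Dict[VisualPhrase, List[int]] = defaultdict(list)

    def add(self, image_id: int, part_words: Dict[str, List[int]]) -> None:
        for phrase in set(extract_phrases(part_words)):
            self.postings[phrase].append(image_id)

    def query(self, part_words: Dict[str, List[int]], top_k: int = 5) -> List[Tuple[int, int]]:
        # Vote for database images that share phrases with the query.
        votes: Dict[int, int] = defaultdict(int)
        for phrase in set(extract_phrases(part_words)):
            for image_id in self.postings.get(phrase, []):
                votes[image_id] += 1
        return sorted(votes.items(), key=lambda kv: kv[1], reverse=True)[:top_k]


if __name__ == "__main__":
    index = PhraseIndex()
    index.add(1, {"torso": [3, 17], "legs": [42]})
    index.add(2, {"torso": [3], "legs": [7]})
    print(index.query({"torso": [3], "legs": [42]}))  # image 1 should rank first
```

Because a phrase carries its part labels, lookups only match images whose corresponding parts share visual words, which is one plausible way to realize the "contextual constraint among different human parts" the abstract describes while keeping query time proportional to the posting lists touched.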

Citation (APA)

Fu, J., Wang, J., Li, Z., Xu, M., & Lu, H. (2013). Efficient clothing retrieval with semantic-preserving visual phrases. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7725 LNCS, pp. 420–431). https://doi.org/10.1007/978-3-642-37444-9_33
