Flexible fashion product retrieval using multimodality-based deep learning


Abstract

Typically, fashion product searching in online shopping malls relies on product meta-information. However, meta-information alone does not guarantee customer satisfaction, owing to inherent limitations such as inaccurate input meta-information, category imbalance, and misclassification of apparel images. These limitations prevent the shopping mall from retrieving the products users actually want. This paper proposes a new fashion product search method using multimodality-based deep learning, which supports more flexible and efficient retrieval by combining faceted queries with fashion image-based features. A deep convolutional neural network (CNN) generates a unique feature vector for each image, and the user's query is vectorized through a long short-term memory (LSTM)-based recurrent neural network (RNN). The semantic similarity between the query vector and the product image vector is then calculated to obtain the best match. Three different forms of the faceted query are supported. We perform quantitative and qualitative analyses to demonstrate the effectiveness and originality of the proposed approach.
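The matching step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the embeddings below are toy stand-ins for the LSTM query vector and the CNN image vectors, and cosine similarity is assumed as the semantic similarity measure.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def rank_products(query_vec, product_vecs):
    """Rank products by similarity of their image vectors to the query vector.

    query_vec    -- stand-in for the LSTM-encoded faceted query
    product_vecs -- dict mapping product id to a stand-in CNN image vector
    """
    scored = [(pid, cosine_similarity(query_vec, vec))
              for pid, vec in product_vecs.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Toy example: the query vector is closest to the first product's image vector.
query = [0.9, 0.1, 0.0]
products = {
    "red_dress": [0.8, 0.2, 0.1],
    "blue_jeans": [0.1, 0.9, 0.3],
}
best_match = rank_products(query, products)[0][0]  # "red_dress"
```

In the actual system the two vectors come from jointly trained CNN and LSTM encoders, so that queries and images are embedded in a shared semantic space before similarity ranking.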

Citation (APA)

Jo, Y., Wi, J., Kim, M., & Lee, J. Y. (2020). Flexible fashion product retrieval using multimodality-based deep learning. Applied Sciences (Switzerland), 10(5). https://doi.org/10.3390/app10051569
