We propose a framework that harnesses visual cues in an unsupervised manner to learn the co-occurrence distribution of items in real-world images for complementary recommendation. Our model learns a non-linear transformation between the two manifolds of source and target item categories (e.g., tops and bottoms in outfits). Given a large dataset of images containing instances of co-occurring items, we train a generative transformer network directly on the feature representation by casting the learning problem as adversarial optimization. Such a conditional generative model can produce multiple novel samples of complementary items (in the feature space) for a given query item. We demonstrate our framework on the task of recommending complementary top apparel for a given bottom clothing item. The recommendations made by our system are diverse and are favored by human experts over those of the baseline approaches.
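The core idea of sampling multiple complementary-item features conditioned on a query item can be illustrated with a minimal sketch. This is not the paper's implementation: the dimensions, the single-layer tanh generator, and the randomly initialized weights are all illustrative assumptions standing in for a trained feature-transform network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (assumptions, not taken from the paper).
FEAT_DIM = 128   # item feature embedding size
NOISE_DIM = 16   # latent noise size for sample diversity

# Randomly initialized weights standing in for a trained generator.
W = rng.standard_normal((FEAT_DIM, FEAT_DIM + NOISE_DIM)) * 0.01
b = np.zeros(FEAT_DIM)

def generate_complementary(query_feat, n_samples=5):
    """Map a query item's feature vector to several candidate
    complementary-item features by sampling different noise vectors,
    mimicking a conditional generator in feature space."""
    samples = []
    for _ in range(n_samples):
        z = rng.standard_normal(NOISE_DIM)            # fresh noise per sample
        x = np.concatenate([query_feat, z])           # condition on the query
        samples.append(np.tanh(W @ x + b))            # non-linear transform
    return np.stack(samples)

query = rng.standard_normal(FEAT_DIM)                 # e.g., a bottom-apparel feature
candidates = generate_complementary(query)
print(candidates.shape)  # (5, 128): five diverse candidate top features
```

Varying the noise vector while holding the query fixed is what yields multiple diverse recommendations; a nearest-neighbor lookup in the feature space would then retrieve actual catalog items.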
CITATION STYLE
Huynh, C. P., Ciptadi, A., Tyagi, A., & Agrawal, A. (2019). CRAFT: Complementary recommendation by adversarial feature transform. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11131 LNCS, pp. 54–66). Springer Verlag. https://doi.org/10.1007/978-3-030-11015-4_7