Learning the compositional visual coherence for complementary recommendations


Abstract

Complementary recommendation, which aims to suggest products that are supplementary to and compatible with a user's purchased items, has become a hot topic in both academia and industry in recent years. Existing work has mainly focused on modeling the co-purchase relations between pairs of items, while the compositional associations within item collections remain largely unexplored. Intuitively, when a user chooses complementary items for a purchased product, she considers not only the global impression but also the visual semantic coherence, such as color collocations and texture compatibilities. To this end, in this paper we propose a novel Content Attentive Neural Network (CANN) to model the comprehensive compositional coherence of both global contents and semantic contents. Specifically, we first propose a Global Coherence Learning (GCL) module based on multi-head attention to model the global compositional coherence. Then, we generate semantic-focal representations from different semantic regions and design a Focal Coherence Learning (FCL) module to learn the focal compositional coherence across these representations. Finally, we optimize CANN with a novel compositional optimization strategy. Extensive experiments on large-scale real-world data clearly demonstrate the effectiveness of CANN compared with several state-of-the-art methods.
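The abstract describes the GCL module only at a high level. As a rough illustration of the idea (not the paper's implementation), the following sketch applies multi-head self-attention over a set of item visual embeddings and pools the result into a single "outfit" representation; all names are hypothetical, and randomly initialized matrices stand in for learned projection weights.

```python
import numpy as np

def multi_head_attention(X, num_heads, rng):
    """Scaled dot-product self-attention with multiple heads over a set
    of item embeddings X with shape (n_items, d). Illustrative only:
    projections are random stand-ins for learned parameters."""
    n, d = X.shape
    assert d % num_heads == 0
    d_h = d // num_heads
    Wq = rng.standard_normal((d, d)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d)) / np.sqrt(d)
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(num_heads):
        s = slice(h * d_h, (h + 1) * d_h)
        # Attention weights among all items in the collection.
        scores = Q[:, s] @ K[:, s].T / np.sqrt(d_h)
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)
        heads.append(weights @ V[:, s])
    return np.concatenate(heads, axis=1)  # (n_items, d)

rng = np.random.default_rng(0)
items = rng.standard_normal((4, 8))   # 4 items, 8-dim visual features
out = multi_head_attention(items, num_heads=2, rng=rng)
coherence = out.mean(axis=0)          # pooled collection representation
print(out.shape, coherence.shape)
```

In a trained model, the pooled vector would feed a scoring layer that ranks candidate complementary items; here it simply shows how attention lets every item condition on every other item in the collection.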


Citation (APA)

Li, Z., Wu, B., Liu, Q., Wu, L., Zhao, H., & Mei, T. (2020). Learning the compositional visual coherence for complementary recommendations. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 3536–3543). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/489

