Attention-guided Multi-step Fusion: A Hierarchical Fusion Network for Multimodal Recommendation


Abstract

The main idea of multimodal recommendation is to make rational use of an item's multimodal information to improve recommendation performance. Previous works directly integrate item multimodal features with item ID embeddings, ignoring the inherent semantic relations contained in the multimodal features. In this paper, we propose a novel and effective aTtention-guided Multi-step FUsion Network for multimodal recommendation, named TMFUN. Specifically, our model first constructs a modality feature graph and an item feature graph to model the latent item-item semantic structures. Then, we use an attention module to identify inherent connections between user-item interaction data and multimodal data, evaluate the impact of multimodal data on different interactions, and achieve early-step fusion of item features. Furthermore, our model optimizes item representations through an attention-guided multi-step fusion strategy and contrastive learning to improve recommendation performance. Extensive experiments on three real-world datasets show that our model outperforms state-of-the-art models.
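To make the two ideas in the abstract more concrete — building latent item-item semantic structures from modality features, and attention-weighted fusion with ID embeddings — here is a minimal sketch in PyTorch. It is not the paper's exact formulation: the function names (`build_knn_item_graph`, `attention_fuse`), the kNN graph construction, and the sigmoid gating form are illustrative assumptions about how such components are commonly implemented.

```python
import torch
import torch.nn.functional as F


def build_knn_item_graph(modal_feats: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Build a kNN item-item affinity graph from one modality's features.

    modal_feats: (num_items, dim) features from a pretrained encoder
    (e.g. visual or textual). Returns a row-normalized dense adjacency
    of shape (num_items, num_items) keeping the k most similar items.
    """
    feats = F.normalize(modal_feats, dim=-1)
    sim = feats @ feats.t()                        # cosine similarity
    topk_vals, topk_idx = sim.topk(k, dim=-1)      # keep k nearest neighbors per item
    adj = torch.zeros_like(sim).scatter_(-1, topk_idx, topk_vals)
    return adj / adj.sum(dim=-1, keepdim=True).clamp(min=1e-8)


def attention_fuse(id_emb: torch.Tensor, modal_emb: torch.Tensor) -> torch.Tensor:
    """Gated fusion of item ID embeddings with a modality embedding.

    Both tensors are (num_items, dim). A per-item scalar gate decides how
    much multimodal signal is injected into the ID representation.
    """
    gate = torch.sigmoid((id_emb * modal_emb).sum(-1, keepdim=True))
    return id_emb + gate * modal_emb


# Usage sketch: propagate modality features over the item graph, then fuse.
num_items, dim = 100, 64
visual_feats = torch.randn(num_items, dim)
id_emb = torch.randn(num_items, dim)

adj = build_knn_item_graph(visual_feats, k=10)
smoothed = adj @ visual_feats                      # one step of graph propagation
fused = attention_fuse(id_emb, F.normalize(smoothed, dim=-1))
```

Multi-step fusion in the paper's sense would repeat propagation and fusion over several steps and add a contrastive objective; the sketch above only shows one such step.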

Citation (APA)

Zhou, Y., Guo, J., Sun, H., Song, B., & Yu, F. R. (2023). Attention-guided Multi-step Fusion: A Hierarchical Fusion Network for Multimodal Recommendation. In SIGIR 2023 - Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 1816–1820). Association for Computing Machinery, Inc. https://doi.org/10.1145/3539618.3591950
